Test Report: Docker_Linux_crio_arm64 18943

a95fbdf9550db8c431fa5a4c330192118acd2cbf:2024-08-31:36027

Test fail (5/338)

| Order | Failed test                                      | Duration (s) |
|-------|--------------------------------------------------|--------------|
| 33    | TestAddons/parallel/Registry                     | 74.75        |
| 34    | TestAddons/parallel/Ingress                      | 151.15       |
| 36    | TestAddons/parallel/MetricsServer                | 306.8        |
| 171   | TestMultiControlPlane/serial/DeleteSecondaryNode | 16.68        |
| 174   | TestMultiControlPlane/serial/RestartCluster      | 125.1        |
TestAddons/parallel/Registry (74.75s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 5.128087ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:345: "registry-6fb4cdfc84-bf4pl" [000dc781-4a18-4524-b73a-681e34eaa529] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004550867s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:345: "registry-proxy-6dfvf" [f354b100-f3b2-4369-b6de-637de12a35fb] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004638349s
addons_test.go:342: (dbg) Run:  kubectl --context addons-926553 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-926553 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-926553 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.111483017s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-926553 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-926553 ip
2024/08/31 22:45:14 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-926553 addons disable registry --alsologtostderr -v=1
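The failing step is the in-cluster connectivity probe: the busybox pod's wget against the registry Service never got a response within the one-minute timeout, even though both the registry and registry-proxy pods were reported Running. A minimal manual reproduction against this profile, assuming the cluster is still up, is just the same commands the test drives (the service name, image, and node IP are taken from the output above; the final curl mirrors the debug GET on port 5000 and assumes curl is available on the host):

	# In-cluster check: should print "HTTP/1.1 200" response headers on success
	kubectl --context addons-926553 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

	# Host-side check against port 5000 on the minikube node IP
	curl -sv http://192.168.49.2:5000/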
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect addons-926553
helpers_test.go:236: (dbg) docker inspect addons-926553:

-- stdout --
	[
	    {
	        "Id": "2b41c4e07f7abc3eee3ba56e7ee5b2b22c7ae1259d49e9bd6c1a695e687c691a",
	        "Created": "2024-08-31T22:32:58.142499264Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 284459,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-31T22:32:58.286853851Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:eb620c1d7126103417d4dc31eb6aaaf95b0878713d0303a36cb77002c31b0deb",
	        "ResolvConfPath": "/var/lib/docker/containers/2b41c4e07f7abc3eee3ba56e7ee5b2b22c7ae1259d49e9bd6c1a695e687c691a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2b41c4e07f7abc3eee3ba56e7ee5b2b22c7ae1259d49e9bd6c1a695e687c691a/hostname",
	        "HostsPath": "/var/lib/docker/containers/2b41c4e07f7abc3eee3ba56e7ee5b2b22c7ae1259d49e9bd6c1a695e687c691a/hosts",
	        "LogPath": "/var/lib/docker/containers/2b41c4e07f7abc3eee3ba56e7ee5b2b22c7ae1259d49e9bd6c1a695e687c691a/2b41c4e07f7abc3eee3ba56e7ee5b2b22c7ae1259d49e9bd6c1a695e687c691a-json.log",
	        "Name": "/addons-926553",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-926553:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-926553",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3ee03926f17b7c804764f694abd55e3fb29259d457363383da7117854abec424-init/diff:/var/lib/docker/overlay2/b65bd3df822a42b081e949f262147909a06a528615f1ebee5ca341285d3e7159/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3ee03926f17b7c804764f694abd55e3fb29259d457363383da7117854abec424/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3ee03926f17b7c804764f694abd55e3fb29259d457363383da7117854abec424/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3ee03926f17b7c804764f694abd55e3fb29259d457363383da7117854abec424/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-926553",
	                "Source": "/var/lib/docker/volumes/addons-926553/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-926553",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-926553",
	                "name.minikube.sigs.k8s.io": "addons-926553",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "299f7cd903653354b274e148f6cb6a39ed6942891df3e3272bc94377e3fd800f",
	            "SandboxKey": "/var/run/docker/netns/299f7cd90365",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-926553": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "7a8828e69332b37e7bad00ea7f7da101018d986bdcdd9608e22ba654914df386",
	                    "EndpointID": "f81499bc432f0db4a48aaa2f7a33d2bce9def00a9f596d90ba418160f18b3dd7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-926553",
	                        "2b41c4e07f7a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
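For reference, the inspect output above shows the registry port 5000/tcp published on the loopback interface as 127.0.0.1:33135. While the container is running, that mapping can be read back with a Go template, analogous to the 22/tcp lookup the provisioner runs later in this log (a sketch, assuming the docker CLI on the Jenkins host):

	docker container inspect addons-926553 \
	  -f '{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}'
	# prints 33135 for this run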
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-926553 -n addons-926553
helpers_test.go:245: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p addons-926553 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p addons-926553 logs -n 25: (1.924737552s)
helpers_test.go:253: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-847558   | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC |                     |
	|         | -p download-only-847558                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	| delete  | -p download-only-847558                                                                     | download-only-847558   | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	| start   | -o=json --download-only                                                                     | download-only-030884   | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC |                     |
	|         | -p download-only-030884                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	| delete  | -p download-only-030884                                                                     | download-only-030884   | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	| delete  | -p download-only-847558                                                                     | download-only-847558   | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	| delete  | -p download-only-030884                                                                     | download-only-030884   | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	| start   | --download-only -p                                                                          | download-docker-718632 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC |                     |
	|         | download-docker-718632                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-718632                                                                   | download-docker-718632 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-123480   | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC |                     |
	|         | binary-mirror-123480                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:44745                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-123480                                                                     | binary-mirror-123480   | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	| addons  | enable dashboard -p                                                                         | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC |                     |
	|         | addons-926553                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC |                     |
	|         | addons-926553                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-926553 --wait=true                                                                | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:36 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-926553 addons                                                                        | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:44 UTC | 31 Aug 24 22:44 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-926553 addons                                                                        | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:44 UTC | 31 Aug 24 22:44 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-926553 addons disable                                                                | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:44 UTC | 31 Aug 24 22:44 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:45 UTC | 31 Aug 24 22:45 UTC |
	|         | -p addons-926553                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-926553 ssh cat                                                                       | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:45 UTC | 31 Aug 24 22:45 UTC |
	|         | /opt/local-path-provisioner/pvc-329ee4ba-4ee8-45f1-ba46-e92218961da0_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-926553 addons disable                                                                | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:45 UTC | 31 Aug 24 22:45 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-926553 ip                                                                            | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:45 UTC | 31 Aug 24 22:45 UTC |
	| addons  | addons-926553 addons disable                                                                | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:45 UTC | 31 Aug 24 22:45 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 22:32:33
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 22:32:33.055573  283957 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:32:33.055738  283957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:32:33.055749  283957 out.go:358] Setting ErrFile to fd 2...
	I0831 22:32:33.055754  283957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:32:33.056034  283957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-277799/.minikube/bin
	I0831 22:32:33.056594  283957 out.go:352] Setting JSON to false
	I0831 22:32:33.057655  283957 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":8101,"bootTime":1725135452,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0831 22:32:33.057748  283957 start.go:139] virtualization:  
	I0831 22:32:33.061311  283957 out.go:177] * [addons-926553] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0831 22:32:33.065254  283957 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 22:32:33.065416  283957 notify.go:220] Checking for updates...
	I0831 22:32:33.070822  283957 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:32:33.074065  283957 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-277799/kubeconfig
	I0831 22:32:33.076774  283957 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-277799/.minikube
	I0831 22:32:33.079454  283957 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0831 22:32:33.082232  283957 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 22:32:33.085445  283957 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:32:33.116782  283957 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0831 22:32:33.116914  283957 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:32:33.173707  283957 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-31 22:32:33.16402705 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:32:33.173832  283957 docker.go:307] overlay module found
	I0831 22:32:33.176642  283957 out.go:177] * Using the docker driver based on user configuration
	I0831 22:32:33.179170  283957 start.go:297] selected driver: docker
	I0831 22:32:33.179214  283957 start.go:901] validating driver "docker" against <nil>
	I0831 22:32:33.179232  283957 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 22:32:33.179877  283957 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:32:33.244492  283957 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-31 22:32:33.235116551 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:32:33.244664  283957 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 22:32:33.244891  283957 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 22:32:33.247588  283957 out.go:177] * Using Docker driver with root privileges
	I0831 22:32:33.250073  283957 cni.go:84] Creating CNI manager for ""
	I0831 22:32:33.250100  283957 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0831 22:32:33.250112  283957 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0831 22:32:33.250206  283957 start.go:340] cluster config:
	{Name:addons-926553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-926553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:32:33.253061  283957 out.go:177] * Starting "addons-926553" primary control-plane node in "addons-926553" cluster
	I0831 22:32:33.255456  283957 cache.go:121] Beginning downloading kic base image for docker with crio
	I0831 22:32:33.258049  283957 out.go:177] * Pulling base image v0.0.44-1724862063-19530 ...
	I0831 22:32:33.260597  283957 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:32:33.260655  283957 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18943-277799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	I0831 22:32:33.260667  283957 cache.go:56] Caching tarball of preloaded images
	I0831 22:32:33.260691  283957 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local docker daemon
	I0831 22:32:33.260749  283957 preload.go:172] Found /home/jenkins/minikube-integration/18943-277799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0831 22:32:33.260760  283957 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0831 22:32:33.261148  283957 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/config.json ...
	I0831 22:32:33.261182  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/config.json: {Name:mkdfcbbb034ebf13d0c934d3b8bb6283f2353c8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:32:33.276646  283957 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 to local cache
	I0831 22:32:33.276792  283957 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory
	I0831 22:32:33.276818  283957 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory, skipping pull
	I0831 22:32:33.276823  283957 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 exists in cache, skipping pull
	I0831 22:32:33.276832  283957 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 as a tarball
	I0831 22:32:33.276842  283957 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from local cache
	I0831 22:32:50.926792  283957 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from cached tarball
	I0831 22:32:50.926833  283957 cache.go:194] Successfully downloaded all kic artifacts
	I0831 22:32:50.926891  283957 start.go:360] acquireMachinesLock for addons-926553: {Name:mk45b5d2bdf6c02f40299229aa5af77faafa98b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 22:32:50.927022  283957 start.go:364] duration metric: took 106.732µs to acquireMachinesLock for "addons-926553"
	I0831 22:32:50.927053  283957 start.go:93] Provisioning new machine with config: &{Name:addons-926553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-926553 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 22:32:50.927149  283957 start.go:125] createHost starting for "" (driver="docker")
	I0831 22:32:50.929291  283957 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0831 22:32:50.929542  283957 start.go:159] libmachine.API.Create for "addons-926553" (driver="docker")
	I0831 22:32:50.929577  283957 client.go:168] LocalClient.Create starting
	I0831 22:32:50.929688  283957 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem
	I0831 22:32:51.568232  283957 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem
	I0831 22:32:51.959805  283957 cli_runner.go:164] Run: docker network inspect addons-926553 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0831 22:32:51.976476  283957 cli_runner.go:211] docker network inspect addons-926553 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0831 22:32:51.976564  283957 network_create.go:284] running [docker network inspect addons-926553] to gather additional debugging logs...
	I0831 22:32:51.976587  283957 cli_runner.go:164] Run: docker network inspect addons-926553
	W0831 22:32:51.998246  283957 cli_runner.go:211] docker network inspect addons-926553 returned with exit code 1
	I0831 22:32:51.998286  283957 network_create.go:287] error running [docker network inspect addons-926553]: docker network inspect addons-926553: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-926553 not found
	I0831 22:32:51.998301  283957 network_create.go:289] output of [docker network inspect addons-926553]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-926553 not found
	
	** /stderr **
	I0831 22:32:51.998418  283957 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0831 22:32:52.020066  283957 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017aa870}
	I0831 22:32:52.020113  283957 network_create.go:124] attempt to create docker network addons-926553 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0831 22:32:52.020180  283957 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-926553 addons-926553
	I0831 22:32:52.103358  283957 network_create.go:108] docker network addons-926553 192.168.49.0/24 created
	I0831 22:32:52.103398  283957 kic.go:121] calculated static IP "192.168.49.2" for the "addons-926553" container
	I0831 22:32:52.103481  283957 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0831 22:32:52.117925  283957 cli_runner.go:164] Run: docker volume create addons-926553 --label name.minikube.sigs.k8s.io=addons-926553 --label created_by.minikube.sigs.k8s.io=true
	I0831 22:32:52.134920  283957 oci.go:103] Successfully created a docker volume addons-926553
	I0831 22:32:52.135011  283957 cli_runner.go:164] Run: docker run --rm --name addons-926553-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-926553 --entrypoint /usr/bin/test -v addons-926553:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -d /var/lib
	I0831 22:32:53.917914  283957 cli_runner.go:217] Completed: docker run --rm --name addons-926553-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-926553 --entrypoint /usr/bin/test -v addons-926553:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -d /var/lib: (1.78286744s)
	I0831 22:32:53.917946  283957 oci.go:107] Successfully prepared a docker volume addons-926553
	I0831 22:32:53.917968  283957 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:32:53.917988  283957 kic.go:194] Starting extracting preloaded images to volume ...
	I0831 22:32:53.918085  283957 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-277799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-926553:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0831 22:32:58.069694  283957 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-277799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-926553:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.151551571s)
	I0831 22:32:58.069731  283957 kic.go:203] duration metric: took 4.15173909s to extract preloaded images to volume ...
	W0831 22:32:58.069874  283957 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0831 22:32:58.069992  283957 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0831 22:32:58.127293  283957 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-926553 --name addons-926553 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-926553 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-926553 --network addons-926553 --ip 192.168.49.2 --volume addons-926553:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0
	I0831 22:32:58.451756  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Running}}
	I0831 22:32:58.471081  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:32:58.493141  283957 cli_runner.go:164] Run: docker exec addons-926553 stat /var/lib/dpkg/alternatives/iptables
	I0831 22:32:58.579570  283957 oci.go:144] the created container "addons-926553" has a running status.
	I0831 22:32:58.579597  283957 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa...
	I0831 22:32:58.856139  283957 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0831 22:32:58.888353  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:32:58.918856  283957 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0831 22:32:58.918881  283957 kic_runner.go:114] Args: [docker exec --privileged addons-926553 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0831 22:32:58.994745  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:32:59.020659  283957 machine.go:93] provisionDockerMachine start ...
	I0831 22:32:59.020755  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:32:59.042776  283957 main.go:141] libmachine: Using SSH client type: native
	I0831 22:32:59.043049  283957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0831 22:32:59.043065  283957 main.go:141] libmachine: About to run SSH command:
	hostname
	I0831 22:32:59.043777  283957 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0831 22:33:02.183965  283957 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-926553
	
	I0831 22:33:02.183992  283957 ubuntu.go:169] provisioning hostname "addons-926553"
	I0831 22:33:02.184057  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:02.201134  283957 main.go:141] libmachine: Using SSH client type: native
	I0831 22:33:02.201387  283957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0831 22:33:02.201404  283957 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-926553 && echo "addons-926553" | sudo tee /etc/hostname
	I0831 22:33:02.349789  283957 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-926553
	
	I0831 22:33:02.349888  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:02.372048  283957 main.go:141] libmachine: Using SSH client type: native
	I0831 22:33:02.372306  283957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0831 22:33:02.372323  283957 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-926553' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-926553/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-926553' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 22:33:02.504705  283957 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 22:33:02.504736  283957 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-277799/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-277799/.minikube}
	I0831 22:33:02.504768  283957 ubuntu.go:177] setting up certificates
	I0831 22:33:02.504779  283957 provision.go:84] configureAuth start
	I0831 22:33:02.504849  283957 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "addons-926553")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-926553
	I0831 22:33:02.523280  283957 provision.go:143] copyHostCerts
	I0831 22:33:02.523372  283957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-277799/.minikube/ca.pem (1082 bytes)
	I0831 22:33:02.523504  283957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-277799/.minikube/cert.pem (1123 bytes)
	I0831 22:33:02.523567  283957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-277799/.minikube/key.pem (1675 bytes)
	I0831 22:33:02.523620  283957 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca-key.pem org=jenkins.addons-926553 san=[127.0.0.1 192.168.49.2 addons-926553 localhost minikube]
	I0831 22:33:02.933713  283957 provision.go:177] copyRemoteCerts
	I0831 22:33:02.933792  283957 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 22:33:02.933842  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:02.950418  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:03.053745  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0831 22:33:03.085010  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0831 22:33:03.111911  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0831 22:33:03.138695  283957 provision.go:87] duration metric: took 633.893833ms to configureAuth
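The ssh client parameters logged above (127.0.0.1:33133, user docker, the generated machine key) describe an ordinary SSH session into the kic container. Assuming the key path from this run, the same session could be opened by hand and the freshly copied server certificate checked against the requested SANs (127.0.0.1 192.168.49.2 addons-926553 localhost minikube) roughly like this:

	ssh -i /home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa -p 33133 docker@127.0.0.1
	# on the node:
	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'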
	I0831 22:33:03.138724  283957 ubuntu.go:193] setting minikube options for container-runtime
	I0831 22:33:03.138976  283957 config.go:182] Loaded profile config "addons-926553": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:33:03.139098  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:03.157231  283957 main.go:141] libmachine: Using SSH client type: native
	I0831 22:33:03.157489  283957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0831 22:33:03.157510  283957 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0831 22:33:03.395474  283957 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0831 22:33:03.395500  283957 machine.go:96] duration metric: took 4.374820866s to provisionDockerMachine
	I0831 22:33:03.395511  283957 client.go:171] duration metric: took 12.46592371s to LocalClient.Create
	I0831 22:33:03.395523  283957 start.go:167] duration metric: took 12.465982753s to libmachine.API.Create "addons-926553"
	I0831 22:33:03.395532  283957 start.go:293] postStartSetup for "addons-926553" (driver="docker")
	I0831 22:33:03.395543  283957 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 22:33:03.395618  283957 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 22:33:03.395665  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:03.414120  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:03.513743  283957 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 22:33:03.517073  283957 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0831 22:33:03.517108  283957 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0831 22:33:03.517137  283957 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0831 22:33:03.517155  283957 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0831 22:33:03.517165  283957 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-277799/.minikube/addons for local assets ...
	I0831 22:33:03.517246  283957 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-277799/.minikube/files for local assets ...
	I0831 22:33:03.517272  283957 start.go:296] duration metric: took 121.734053ms for postStartSetup
	I0831 22:33:03.517586  283957 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "addons-926553")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-926553
	I0831 22:33:03.539317  283957 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/config.json ...
	I0831 22:33:03.539619  283957 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:33:03.539672  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:03.556680  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:03.650277  283957 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0831 22:33:03.654747  283957 start.go:128] duration metric: took 12.727579827s to createHost
	I0831 22:33:03.654772  283957 start.go:83] releasing machines lock for "addons-926553", held for 12.727737422s
	I0831 22:33:03.654860  283957 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "addons-926553")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-926553
	I0831 22:33:03.672628  283957 ssh_runner.go:195] Run: cat /version.json
	I0831 22:33:03.672710  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:03.673358  283957 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 22:33:03.673442  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:03.697266  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:03.710029  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:03.795932  283957 ssh_runner.go:195] Run: systemctl --version
	I0831 22:33:03.930195  283957 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0831 22:33:04.071340  283957 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0831 22:33:04.075814  283957 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 22:33:04.099545  283957 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0831 22:33:04.099629  283957 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 22:33:04.136429  283957 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0831 22:33:04.136452  283957 start.go:495] detecting cgroup driver to use...
	I0831 22:33:04.136490  283957 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0831 22:33:04.136563  283957 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0831 22:33:04.152782  283957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 22:33:04.164726  283957 docker.go:217] disabling cri-docker service (if available) ...
	I0831 22:33:04.164790  283957 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0831 22:33:04.179068  283957 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0831 22:33:04.193725  283957 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0831 22:33:04.288369  283957 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0831 22:33:04.384337  283957 docker.go:233] disabling docker service ...
	I0831 22:33:04.384478  283957 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0831 22:33:04.405127  283957 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0831 22:33:04.417339  283957 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0831 22:33:04.502240  283957 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0831 22:33:04.591263  283957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0831 22:33:04.604121  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 22:33:04.621501  283957 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0831 22:33:04.621615  283957 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:33:04.632529  283957 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0831 22:33:04.632622  283957 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:33:04.642518  283957 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:33:04.652512  283957 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:33:04.663605  283957 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 22:33:04.672528  283957 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:33:04.682613  283957 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:33:04.698852  283957 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:33:04.708709  283957 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 22:33:04.716981  283957 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 22:33:04.725394  283957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:33:04.831046  283957 ssh_runner.go:195] Run: sudo systemctl restart crio
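Taken together, the sed edits above converge on a CRI-O drop-in along these lines before crio is restarted. This is reconstructed from the commands rather than captured from /etc/crio/crio.conf.d/02-crio.conf on the node, and the TOML table headers are assumptions:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]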
	I0831 22:33:04.953766  283957 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0831 22:33:04.953873  283957 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0831 22:33:04.958520  283957 start.go:563] Will wait 60s for crictl version
	I0831 22:33:04.958584  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:33:04.962128  283957 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 22:33:04.997059  283957 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0831 22:33:04.997167  283957 ssh_runner.go:195] Run: crio --version
	I0831 22:33:05.045856  283957 ssh_runner.go:195] Run: crio --version
	I0831 22:33:05.092004  283957 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0831 22:33:05.094977  283957 cli_runner.go:164] Run: docker network inspect addons-926553 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0831 22:33:05.112048  283957 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0831 22:33:05.116110  283957 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 22:33:05.128026  283957 kubeadm.go:883] updating cluster {Name:addons-926553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-926553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0831 22:33:05.128170  283957 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:33:05.128234  283957 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 22:33:05.208377  283957 crio.go:514] all images are preloaded for cri-o runtime.
	I0831 22:33:05.208421  283957 crio.go:433] Images already preloaded, skipping extraction
	I0831 22:33:05.208479  283957 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 22:33:05.246065  283957 crio.go:514] all images are preloaded for cri-o runtime.
	I0831 22:33:05.246089  283957 cache_images.go:84] Images are preloaded, skipping loading
	I0831 22:33:05.246099  283957 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0831 22:33:05.246205  283957 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-926553 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-926553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0831 22:33:05.246297  283957 ssh_runner.go:195] Run: crio config
	I0831 22:33:05.292734  283957 cni.go:84] Creating CNI manager for ""
	I0831 22:33:05.292759  283957 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0831 22:33:05.292771  283957 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0831 22:33:05.292794  283957 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-926553 NodeName:addons-926553 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0831 22:33:05.293025  283957 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-926553"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0831 22:33:05.293106  283957 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0831 22:33:05.302182  283957 binaries.go:44] Found k8s binaries, skipping transfer
	I0831 22:33:05.302257  283957 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0831 22:33:05.311092  283957 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0831 22:33:05.329236  283957 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 22:33:05.347791  283957 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0831 22:33:05.366848  283957 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0831 22:33:05.370373  283957 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 22:33:05.381457  283957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:33:05.465768  283957 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 22:33:05.479694  283957 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553 for IP: 192.168.49.2
	I0831 22:33:05.479717  283957 certs.go:194] generating shared ca certs ...
	I0831 22:33:05.479733  283957 certs.go:226] acquiring lock for ca certs: {Name:mk25c48345241d49df22687ae20353d5a7b46e8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:05.479864  283957 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key
	I0831 22:33:06.370705  283957 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt ...
	I0831 22:33:06.370800  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt: {Name:mk127fa4684d9b07fbbfe78fd379ac7f2858784d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:06.371022  283957 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key ...
	I0831 22:33:06.371065  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key: {Name:mkaa1c85c29bc9b8e67687de42c28210df6897ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:06.372603  283957 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key
	I0831 22:33:06.601904  283957 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.crt ...
	I0831 22:33:06.601936  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.crt: {Name:mkdc81b529896f489764dcced8efa122bc80e6c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:06.602125  283957 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key ...
	I0831 22:33:06.602138  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key: {Name:mkd36c32182ba675bb26d2d1c2420f0531884885 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:06.602761  283957 certs.go:256] generating profile certs ...
	I0831 22:33:06.602831  283957 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.key
	I0831 22:33:06.602851  283957 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt with IP's: []
	I0831 22:33:07.200696  283957 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt ...
	I0831 22:33:07.200743  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: {Name:mk55d73b23a418e158fddd2a2029982fed955c08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:07.200943  283957 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.key ...
	I0831 22:33:07.200989  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.key: {Name:mk59a6767b126a801e3c15dd1fd3a3348aa14ff1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:07.201084  283957 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.key.38417bc3
	I0831 22:33:07.201105  283957 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.crt.38417bc3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0831 22:33:07.643963  283957 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.crt.38417bc3 ...
	I0831 22:33:07.643994  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.crt.38417bc3: {Name:mk8845045369642c2652f6024489c05d54865b33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:07.644178  283957 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.key.38417bc3 ...
	I0831 22:33:07.644191  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.key.38417bc3: {Name:mk69db76c63a333ce273b6b1150f927c3534bc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:07.644723  283957 certs.go:381] copying /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.crt.38417bc3 -> /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.crt
	I0831 22:33:07.644822  283957 certs.go:385] copying /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.key.38417bc3 -> /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.key
	I0831 22:33:07.644885  283957 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/proxy-client.key
	I0831 22:33:07.644904  283957 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/proxy-client.crt with IP's: []
	I0831 22:33:07.769112  283957 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/proxy-client.crt ...
	I0831 22:33:07.769146  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/proxy-client.crt: {Name:mk709a4df7e86ad0190ea4e7918008cb10101a95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:07.769717  283957 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/proxy-client.key ...
	I0831 22:33:07.769737  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/proxy-client.key: {Name:mk55ab13960a2f23e6e30c97ac70318ef038cdd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:07.769938  283957 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca-key.pem (1679 bytes)
	I0831 22:33:07.769982  283957 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem (1082 bytes)
	I0831 22:33:07.770019  283957 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem (1123 bytes)
	I0831 22:33:07.770046  283957 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem (1675 bytes)
	I0831 22:33:07.770668  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 22:33:07.796259  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 22:33:07.828503  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 22:33:07.867326  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0831 22:33:07.892900  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0831 22:33:07.917006  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0831 22:33:07.941026  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 22:33:07.964770  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0831 22:33:07.989226  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 22:33:08.021885  283957 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0831 22:33:08.053952  283957 ssh_runner.go:195] Run: openssl version
	I0831 22:33:08.060101  283957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 22:33:08.070747  283957 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:33:08.074388  283957 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:33:08.074466  283957 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:33:08.082225  283957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
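The symlink name b5213941.0 is not arbitrary: b5213941 is the OpenSSL subject-name hash of minikubeCA.pem, which is exactly what the preceding openssl x509 -hash call computes, and the .0 suffix is the index that OpenSSL's hashed cert-directory lookup in /etc/ssl/certs expects. A minimal by-hand equivalent of these two steps would be:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the hash used as the link name below
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0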
	I0831 22:33:08.092117  283957 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 22:33:08.095591  283957 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0831 22:33:08.095645  283957 kubeadm.go:392] StartCluster: {Name:addons-926553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-926553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:33:08.095732  283957 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0831 22:33:08.095788  283957 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0831 22:33:08.141952  283957 cri.go:89] found id: ""
	I0831 22:33:08.142024  283957 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0831 22:33:08.151170  283957 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0831 22:33:08.160571  283957 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0831 22:33:08.160636  283957 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0831 22:33:08.169922  283957 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0831 22:33:08.169943  283957 kubeadm.go:157] found existing configuration files:
	
	I0831 22:33:08.170003  283957 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0831 22:33:08.178997  283957 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0831 22:33:08.179084  283957 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0831 22:33:08.187643  283957 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0831 22:33:08.196349  283957 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0831 22:33:08.196437  283957 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0831 22:33:08.205030  283957 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0831 22:33:08.213907  283957 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0831 22:33:08.213994  283957 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0831 22:33:08.222476  283957 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0831 22:33:08.231658  283957 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0831 22:33:08.231726  283957 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0831 22:33:08.240283  283957 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0831 22:33:08.279889  283957 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0831 22:33:08.280060  283957 kubeadm.go:310] [preflight] Running pre-flight checks
	I0831 22:33:08.302891  283957 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0831 22:33:08.302989  283957 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-aws
	I0831 22:33:08.303047  283957 kubeadm.go:310] OS: Linux
	I0831 22:33:08.303109  283957 kubeadm.go:310] CGROUPS_CPU: enabled
	I0831 22:33:08.303175  283957 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0831 22:33:08.303241  283957 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0831 22:33:08.303307  283957 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0831 22:33:08.303382  283957 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0831 22:33:08.303472  283957 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0831 22:33:08.303576  283957 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0831 22:33:08.303659  283957 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0831 22:33:08.303742  283957 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0831 22:33:08.375106  283957 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0831 22:33:08.375280  283957 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0831 22:33:08.375404  283957 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0831 22:33:08.381947  283957 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0831 22:33:08.385255  283957 out.go:235]   - Generating certificates and keys ...
	I0831 22:33:08.385428  283957 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0831 22:33:08.385523  283957 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0831 22:33:08.637437  283957 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0831 22:33:09.463131  283957 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0831 22:33:10.033346  283957 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0831 22:33:10.906857  283957 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0831 22:33:11.453764  283957 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0831 22:33:11.454108  283957 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-926553 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0831 22:33:12.062393  283957 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0831 22:33:12.062743  283957 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-926553 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0831 22:33:12.309286  283957 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0831 22:33:12.573925  283957 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0831 22:33:12.914344  283957 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0831 22:33:12.914632  283957 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0831 22:33:13.308464  283957 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0831 22:33:13.644764  283957 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0831 22:33:14.238434  283957 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0831 22:33:14.678365  283957 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0831 22:33:15.169684  283957 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0831 22:33:15.170746  283957 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0831 22:33:15.174253  283957 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0831 22:33:15.177263  283957 out.go:235]   - Booting up control plane ...
	I0831 22:33:15.177380  283957 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0831 22:33:15.177460  283957 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0831 22:33:15.178516  283957 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0831 22:33:15.190024  283957 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0831 22:33:15.196959  283957 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0831 22:33:15.197061  283957 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0831 22:33:15.294087  283957 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0831 22:33:15.294208  283957 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0831 22:33:16.295207  283957 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00118568s
	I0831 22:33:16.295299  283957 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0831 22:33:22.297225  283957 kubeadm.go:310] [api-check] The API server is healthy after 6.002301756s
	I0831 22:33:22.317717  283957 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0831 22:33:22.333223  283957 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0831 22:33:22.356793  283957 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0831 22:33:22.356989  283957 kubeadm.go:310] [mark-control-plane] Marking the node addons-926553 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0831 22:33:22.368934  283957 kubeadm.go:310] [bootstrap-token] Using token: bpizuk.5bt7ue9fr9w4aczf
	I0831 22:33:22.373429  283957 out.go:235]   - Configuring RBAC rules ...
	I0831 22:33:22.373568  283957 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0831 22:33:22.379902  283957 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0831 22:33:22.391608  283957 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0831 22:33:22.397570  283957 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0831 22:33:22.401429  283957 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0831 22:33:22.405725  283957 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0831 22:33:22.704690  283957 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0831 22:33:23.180935  283957 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0831 22:33:23.704316  283957 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0831 22:33:23.707745  283957 kubeadm.go:310] 
	I0831 22:33:23.707828  283957 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0831 22:33:23.707837  283957 kubeadm.go:310] 
	I0831 22:33:23.707924  283957 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0831 22:33:23.707936  283957 kubeadm.go:310] 
	I0831 22:33:23.707962  283957 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0831 22:33:23.708048  283957 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0831 22:33:23.708128  283957 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0831 22:33:23.708138  283957 kubeadm.go:310] 
	I0831 22:33:23.708191  283957 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0831 22:33:23.708200  283957 kubeadm.go:310] 
	I0831 22:33:23.708251  283957 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0831 22:33:23.708259  283957 kubeadm.go:310] 
	I0831 22:33:23.708311  283957 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0831 22:33:23.708384  283957 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0831 22:33:23.708476  283957 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0831 22:33:23.708490  283957 kubeadm.go:310] 
	I0831 22:33:23.708572  283957 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0831 22:33:23.708648  283957 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0831 22:33:23.708655  283957 kubeadm.go:310] 
	I0831 22:33:23.708737  283957 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bpizuk.5bt7ue9fr9w4aczf \
	I0831 22:33:23.708860  283957 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3593c859f62fc352e4288d7593bda1bad3208e885169afef8f46acbefa784a7c \
	I0831 22:33:23.708888  283957 kubeadm.go:310] 	--control-plane 
	I0831 22:33:23.708893  283957 kubeadm.go:310] 
	I0831 22:33:23.708977  283957 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0831 22:33:23.708982  283957 kubeadm.go:310] 
	I0831 22:33:23.709068  283957 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bpizuk.5bt7ue9fr9w4aczf \
	I0831 22:33:23.709169  283957 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3593c859f62fc352e4288d7593bda1bad3208e885169afef8f46acbefa784a7c 
	I0831 22:33:23.712617  283957 kubeadm.go:310] W0831 22:33:08.276569    1188 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0831 22:33:23.712923  283957 kubeadm.go:310] W0831 22:33:08.277503    1188 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0831 22:33:23.713163  283957 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-aws\n", err: exit status 1
	I0831 22:33:23.713299  283957 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
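The two v1beta3 deprecation warnings refer to the config minikube wrote to /var/tmp/minikube/kubeadm.yaml earlier in this run; kubeadm's own suggested remedy, quoted in the warnings, would look roughly as follows on the node (the output filename here is illustrative only):

	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate \
	  --old-config /var/tmp/minikube/kubeadm.yaml \
	  --new-config /var/tmp/minikube/kubeadm-migrated.yaml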
	I0831 22:33:23.713314  283957 cni.go:84] Creating CNI manager for ""
	I0831 22:33:23.713322  283957 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0831 22:33:23.716282  283957 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0831 22:33:23.719220  283957 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0831 22:33:23.723271  283957 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0831 22:33:23.723293  283957 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0831 22:33:23.741607  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0831 22:33:24.052823  283957 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0831 22:33:24.052918  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:33:24.052970  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-926553 minikube.k8s.io/updated_at=2024_08_31T22_33_24_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2 minikube.k8s.io/name=addons-926553 minikube.k8s.io/primary=true
	I0831 22:33:24.230141  283957 ops.go:34] apiserver oom_adj: -16
	I0831 22:33:24.230269  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:33:24.730397  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:33:25.230993  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:33:25.730610  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:33:26.230407  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:33:26.730761  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:33:27.231064  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:33:27.730886  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:33:28.230560  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:33:28.321873  283957 kubeadm.go:1113] duration metric: took 4.26902395s to wait for elevateKubeSystemPrivileges
	I0831 22:33:28.321901  283957 kubeadm.go:394] duration metric: took 20.226260277s to StartCluster
	I0831 22:33:28.321917  283957 settings.go:142] acquiring lock: {Name:mkadbc7d53c5858a38d57ec152e52037ebee242b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:28.322035  283957 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-277799/kubeconfig
	I0831 22:33:28.322400  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/kubeconfig: {Name:mk030275545fba839e6cc35acffc3f7a124ed10a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:28.323046  283957 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 22:33:28.323174  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0831 22:33:28.323438  283957 config.go:182] Loaded profile config "addons-926553": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:33:28.323475  283957 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0831 22:33:28.323555  283957 addons.go:69] Setting yakd=true in profile "addons-926553"
	I0831 22:33:28.323574  283957 addons.go:234] Setting addon yakd=true in "addons-926553"
	I0831 22:33:28.323597  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.324068  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.324839  283957 addons.go:69] Setting cloud-spanner=true in profile "addons-926553"
	I0831 22:33:28.324866  283957 addons.go:234] Setting addon cloud-spanner=true in "addons-926553"
	I0831 22:33:28.324890  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.325338  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.325583  283957 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-926553"
	I0831 22:33:28.325617  283957 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-926553"
	I0831 22:33:28.325650  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.326088  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.326414  283957 addons.go:69] Setting registry=true in profile "addons-926553"
	I0831 22:33:28.326440  283957 addons.go:234] Setting addon registry=true in "addons-926553"
	I0831 22:33:28.326465  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.326854  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.329500  283957 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-926553"
	I0831 22:33:28.329573  283957 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-926553"
	I0831 22:33:28.329606  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.330028  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.342354  283957 addons.go:69] Setting default-storageclass=true in profile "addons-926553"
	I0831 22:33:28.342397  283957 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-926553"
	I0831 22:33:28.342712  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.342891  283957 addons.go:69] Setting storage-provisioner=true in profile "addons-926553"
	I0831 22:33:28.342929  283957 addons.go:234] Setting addon storage-provisioner=true in "addons-926553"
	I0831 22:33:28.342990  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.349869  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.360101  283957 addons.go:69] Setting gcp-auth=true in profile "addons-926553"
	I0831 22:33:28.360166  283957 mustload.go:65] Loading cluster: addons-926553
	I0831 22:33:28.360443  283957 config.go:182] Loaded profile config "addons-926553": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:33:28.360907  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.366186  283957 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-926553"
	I0831 22:33:28.366367  283957 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-926553"
	I0831 22:33:28.366876  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.375213  283957 addons.go:69] Setting ingress=true in profile "addons-926553"
	I0831 22:33:28.375277  283957 addons.go:234] Setting addon ingress=true in "addons-926553"
	I0831 22:33:28.375340  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.376302  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.380625  283957 addons.go:69] Setting volcano=true in profile "addons-926553"
	I0831 22:33:28.380724  283957 addons.go:234] Setting addon volcano=true in "addons-926553"
	I0831 22:33:28.380800  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.381420  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.394994  283957 addons.go:69] Setting ingress-dns=true in profile "addons-926553"
	I0831 22:33:28.395035  283957 addons.go:234] Setting addon ingress-dns=true in "addons-926553"
	I0831 22:33:28.395105  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.395705  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.401028  283957 addons.go:69] Setting volumesnapshots=true in profile "addons-926553"
	I0831 22:33:28.401089  283957 addons.go:234] Setting addon volumesnapshots=true in "addons-926553"
	I0831 22:33:28.401140  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.401758  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.402549  283957 out.go:177] * Verifying Kubernetes components...
	I0831 22:33:28.428671  283957 addons.go:69] Setting inspektor-gadget=true in profile "addons-926553"
	I0831 22:33:28.428730  283957 addons.go:234] Setting addon inspektor-gadget=true in "addons-926553"
	I0831 22:33:28.428784  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.429708  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.482979  283957 addons.go:69] Setting metrics-server=true in profile "addons-926553"
	I0831 22:33:28.483022  283957 addons.go:234] Setting addon metrics-server=true in "addons-926553"
	I0831 22:33:28.483067  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.483527  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.544912  283957 out.go:177]   - Using image docker.io/registry:2.8.3
	I0831 22:33:28.556676  283957 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0831 22:33:28.590337  283957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:33:28.592737  283957 addons.go:234] Setting addon default-storageclass=true in "addons-926553"
	I0831 22:33:28.592814  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.593533  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.602703  283957 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0831 22:33:28.613912  283957 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0831 22:33:28.616792  283957 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0831 22:33:28.616842  283957 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0831 22:33:28.616938  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.642744  283957 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0831 22:33:28.642778  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0831 22:33:28.642878  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.643214  283957 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0831 22:33:28.643541  283957 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0831 22:33:28.643555  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0831 22:33:28.643629  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.673321  283957 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 22:33:28.673653  283957 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0831 22:33:28.676085  283957 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 22:33:28.676117  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0831 22:33:28.676204  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.680157  283957 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0831 22:33:28.682935  283957 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	W0831 22:33:28.685152  283957 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0831 22:33:28.688167  283957 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0831 22:33:28.688191  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0831 22:33:28.688265  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.711171  283957 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0831 22:33:28.711327  283957 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0831 22:33:28.711525  283957 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0831 22:33:28.713867  283957 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0831 22:33:28.713897  283957 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0831 22:33:28.714009  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.716525  283957 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0831 22:33:28.716567  283957 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0831 22:33:28.716656  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.730720  283957 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0831 22:33:28.736851  283957 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0831 22:33:28.740033  283957 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0831 22:33:28.742712  283957 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0831 22:33:28.743079  283957 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0831 22:33:28.743093  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0831 22:33:28.743174  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.743485  283957 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0831 22:33:28.743695  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.746617  283957 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0831 22:33:28.746887  283957 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0831 22:33:28.746921  283957 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0831 22:33:28.746978  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.780079  283957 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0831 22:33:28.780109  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0831 22:33:28.780197  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.789453  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0831 22:33:28.790925  283957 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0831 22:33:28.793675  283957 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0831 22:33:28.799676  283957 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0831 22:33:28.803611  283957 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0831 22:33:28.803643  283957 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0831 22:33:28.803743  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.812459  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:28.863289  283957 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-926553"
	I0831 22:33:28.863352  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.863920  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.868300  283957 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0831 22:33:28.868325  283957 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0831 22:33:28.868620  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.883317  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:28.949248  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:28.960803  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:29.002979  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:29.003648  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:29.046634  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:29.047268  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:29.055178  283957 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0831 22:33:29.055568  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:29.061509  283957 out.go:177]   - Using image docker.io/busybox:stable
	I0831 22:33:29.064558  283957 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0831 22:33:29.064583  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0831 22:33:29.064648  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:29.066321  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:29.088665  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:29.089600  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:29.108712  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:29.442641  283957 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0831 22:33:29.442677  283957 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0831 22:33:29.526255  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0831 22:33:29.530947  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0831 22:33:29.533064  283957 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0831 22:33:29.533105  283957 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0831 22:33:29.534377  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0831 22:33:29.596562  283957 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0831 22:33:29.596599  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0831 22:33:29.613705  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0831 22:33:29.630607  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0831 22:33:29.647426  283957 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0831 22:33:29.647458  283957 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0831 22:33:29.653219  283957 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0831 22:33:29.653263  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0831 22:33:29.657539  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 22:33:29.660517  283957 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0831 22:33:29.660568  283957 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0831 22:33:29.663695  283957 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.073312217s)
	I0831 22:33:29.663842  283957 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 22:33:29.666078  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0831 22:33:29.710627  283957 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0831 22:33:29.710667  283957 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0831 22:33:29.735336  283957 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0831 22:33:29.735373  283957 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0831 22:33:29.784696  283957 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0831 22:33:29.784736  283957 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0831 22:33:29.855640  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0831 22:33:29.877050  283957 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0831 22:33:29.877099  283957 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0831 22:33:29.904145  283957 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0831 22:33:29.904181  283957 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0831 22:33:29.911988  283957 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0831 22:33:29.912025  283957 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0831 22:33:29.945936  283957 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0831 22:33:29.945983  283957 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0831 22:33:29.979809  283957 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0831 22:33:29.979844  283957 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0831 22:33:30.081879  283957 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0831 22:33:30.081924  283957 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0831 22:33:30.094433  283957 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0831 22:33:30.094470  283957 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0831 22:33:30.121606  283957 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0831 22:33:30.121648  283957 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0831 22:33:30.147465  283957 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0831 22:33:30.147495  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0831 22:33:30.194891  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0831 22:33:30.335224  283957 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0831 22:33:30.335253  283957 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0831 22:33:30.357845  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0831 22:33:30.370585  283957 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0831 22:33:30.370614  283957 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0831 22:33:30.380434  283957 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0831 22:33:30.380480  283957 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0831 22:33:30.470604  283957 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0831 22:33:30.470632  283957 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0831 22:33:30.474717  283957 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0831 22:33:30.474743  283957 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0831 22:33:30.480308  283957 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0831 22:33:30.480332  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0831 22:33:30.551614  283957 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0831 22:33:30.551645  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0831 22:33:30.555526  283957 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0831 22:33:30.555551  283957 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0831 22:33:30.572488  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0831 22:33:30.626735  283957 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0831 22:33:30.626772  283957 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0831 22:33:30.659143  283957 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0831 22:33:30.659168  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0831 22:33:30.708306  283957 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0831 22:33:30.708339  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0831 22:33:30.751947  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0831 22:33:30.779486  283957 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0831 22:33:30.779512  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0831 22:33:30.883168  283957 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0831 22:33:30.883208  283957 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0831 22:33:31.034072  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0831 22:33:32.347271  283957 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.557782579s)
	I0831 22:33:32.347302  283957 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
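	For reference, the CoreDNS rewrite that just completed amounts to the following (a minimal sketch assuming minikube's standard Corefile layout; the 192.168.49.1 gateway address and the sed expressions are taken from the logged command above, with the in-VM kubectl path shortened to plain kubectl):

		# Fetch the coredns ConfigMap, insert a hosts{} block resolving
		# host.minikube.internal to the host gateway, enable query logging,
		# and push the edited manifest back with `kubectl replace`.
		kubectl -n kube-system get configmap coredns -o yaml \
		  | sed -e '/forward . \/etc\/resolv.conf/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
		        -e '/errors *$/i \        log' \
		  | kubectl replace -f -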
	I0831 22:33:32.348296  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.821995008s)
	I0831 22:33:33.626333  283957 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-926553" context rescaled to 1 replicas
	I0831 22:33:34.257701  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.726689759s)
	I0831 22:33:34.257836  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.72343282s)
	I0831 22:33:35.750704  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.136952511s)
	I0831 22:33:35.750778  283957 addons.go:475] Verifying addon ingress=true in "addons-926553"
	I0831 22:33:35.750934  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.120293953s)
	I0831 22:33:35.751173  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.093604816s)
	I0831 22:33:35.751232  283957 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.087374745s)
	I0831 22:33:35.751352  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.085238138s)
	I0831 22:33:35.751534  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.89586536s)
	I0831 22:33:35.751557  283957 addons.go:475] Verifying addon registry=true in "addons-926553"
	I0831 22:33:35.752026  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.557102507s)
	I0831 22:33:35.752048  283957 addons.go:475] Verifying addon metrics-server=true in "addons-926553"
	I0831 22:33:35.752087  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.394213157s)
	I0831 22:33:35.752485  283957 node_ready.go:35] waiting up to 6m0s for node "addons-926553" to be "Ready" ...
	I0831 22:33:35.753392  283957 out.go:177] * Verifying ingress addon...
	I0831 22:33:35.753389  283957 out.go:177] * Verifying registry addon...
	I0831 22:33:35.755348  283957 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-926553 service yakd-dashboard -n yakd-dashboard
	
	I0831 22:33:35.757767  283957 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0831 22:33:35.757777  283957 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0831 22:33:35.790994  283957 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0831 22:33:35.791083  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:35.805166  283957 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0831 22:33:35.805241  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:35.829747  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.25721674s)
	W0831 22:33:35.830055  283957 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0831 22:33:35.830104  283957 retry.go:31] will retry after 224.217796ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0831 22:33:35.829894  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.077890025s)
	W0831 22:33:35.831762  283957 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0831 22:33:36.055322  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
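	The volumesnapshots failure and retry above reflect a common CRD/CR ordering race: the VolumeSnapshotClass object is applied in the same batch as the CRDs that define it, so the apply fails with "no matches for kind ... ensure CRDs are installed first" until the CRDs are established, and minikube simply re-applies the batch (here with --force). A minimal sketch of how the race can be avoided when applying such manifests by hand (file and CRD names taken from the log above; `kubectl wait` on the Established condition is standard kubectl behaviour):

		# Apply the CRDs first, wait until the API server reports them as
		# Established, then apply the objects that depend on them.
		kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		kubectl wait --for=condition=Established --timeout=60s \
		  crd/volumesnapshotclasses.snapshot.storage.k8s.io
		kubectl apply -f csi-hostpath-snapshotclass.yaml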
	I0831 22:33:36.093372  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.059244383s)
	I0831 22:33:36.093453  283957 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-926553"
	I0831 22:33:36.096487  283957 out.go:177] * Verifying csi-hostpath-driver addon...
	I0831 22:33:36.100111  283957 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0831 22:33:36.115482  283957 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0831 22:33:36.115552  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:36.263976  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:36.265062  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:36.604587  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:36.787244  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:36.788063  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:37.104478  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:37.265822  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:37.267285  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:37.604559  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:37.756432  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:33:37.765094  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:37.766439  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:38.119367  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:38.262590  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:38.263797  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:38.604697  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:38.763609  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:38.764734  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:39.104910  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:39.268592  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:39.269044  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:39.283217  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.227800168s)
	I0831 22:33:39.608539  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:39.699330  283957 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0831 22:33:39.699446  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:39.716174  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:39.763187  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:39.763930  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:39.822207  283957 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0831 22:33:39.846744  283957 addons.go:234] Setting addon gcp-auth=true in "addons-926553"
	I0831 22:33:39.846795  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:39.847250  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:39.875523  283957 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0831 22:33:39.875573  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:39.898490  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:39.991759  283957 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0831 22:33:39.994410  283957 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0831 22:33:39.996970  283957 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0831 22:33:39.996996  283957 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0831 22:33:40.029853  283957 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0831 22:33:40.029886  283957 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0831 22:33:40.054335  283957 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0831 22:33:40.054357  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0831 22:33:40.077923  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0831 22:33:40.110092  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:40.256734  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:33:40.262779  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:40.263788  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:40.618485  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:40.635469  283957 addons.go:475] Verifying addon gcp-auth=true in "addons-926553"
	I0831 22:33:40.638250  283957 out.go:177] * Verifying gcp-auth addon...
	I0831 22:33:40.641904  283957 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0831 22:33:40.717924  283957 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0831 22:33:40.717949  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:40.760808  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:40.761737  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:41.103631  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:41.147102  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:41.261846  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:41.262577  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:41.605179  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:41.645765  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:41.762543  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:41.764051  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:42.105362  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:42.148283  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:42.258237  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:33:42.263214  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:42.264306  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:42.604818  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:42.646007  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:42.762250  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:42.762606  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:43.103968  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:43.145529  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:43.261669  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:43.262507  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:43.603902  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:43.645804  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:43.762089  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:43.762820  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:44.104008  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:44.145229  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:44.261278  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:44.262098  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:44.604072  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:44.645225  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:44.755790  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:33:44.762238  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:44.763675  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:45.119481  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:45.151439  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:45.262923  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:45.263585  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:45.603923  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:45.645062  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:45.762245  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:45.763179  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:46.103665  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:46.145991  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:46.262108  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:46.262871  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:46.603987  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:46.645848  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:46.755967  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:33:46.762356  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:46.763040  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:47.103999  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:47.145133  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:47.265067  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:47.265999  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:47.604241  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:47.645521  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:47.761239  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:47.762226  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:48.104502  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:48.145951  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:48.261871  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:48.262973  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:48.604572  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:48.645471  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:48.762598  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:48.763120  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:49.104271  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:49.145932  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:49.256720  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:33:49.262226  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:49.263641  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:49.604683  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:49.645947  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:49.761803  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:49.762015  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:50.103842  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:50.145422  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:50.261604  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:50.262384  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:50.604492  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:50.645631  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:50.762236  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:50.762361  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:51.104382  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:51.145709  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:51.261382  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:51.262159  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:51.604037  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:51.645599  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:51.756631  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:33:51.762132  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:51.762943  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:52.103840  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:52.146303  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:52.260993  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:52.262050  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:52.604518  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:52.645695  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:52.762149  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:52.762978  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:53.104308  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:53.145453  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:53.262149  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:53.262946  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:53.604459  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:53.645652  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:53.762137  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:53.762727  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:54.104542  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:54.145161  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:54.255923  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:33:54.262062  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:54.263015  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:54.603912  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:54.645424  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:54.761724  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:54.763411  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:55.104967  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:55.145553  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:55.262546  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:55.262785  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:55.604748  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:55.645583  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:55.761826  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:55.763402  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:56.105089  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:56.146463  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:56.256974  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:33:56.263076  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:56.263723  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:56.606473  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:56.647735  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:56.764781  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:56.766164  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:57.104318  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:57.146098  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:57.269923  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:57.271223  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:57.604825  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:57.645919  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:57.763180  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:57.763592  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:58.104174  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:58.145739  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:58.261942  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:58.262811  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:58.603886  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:58.645351  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:58.757020  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:33:58.761675  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:58.763460  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:59.104110  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:59.145526  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:59.262377  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:59.262612  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:59.604341  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:59.645727  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:59.762132  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:59.762980  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:00.136282  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:00.175701  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:00.297607  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:00.298427  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:00.605093  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:00.645870  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:00.757140  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:34:00.762169  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:00.763557  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:01.104348  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:01.146225  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:01.261098  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:01.262282  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:01.603884  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:01.645426  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:01.762105  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:01.762957  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:02.104192  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:02.145434  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:02.262134  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:02.262894  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:02.603513  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:02.645138  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:02.762333  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:02.763186  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:03.104291  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:03.145545  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:03.256690  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:34:03.262509  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:03.263063  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:03.604219  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:03.645652  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:03.761550  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:03.763199  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:04.103986  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:04.145092  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:04.260906  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:04.261910  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:04.604129  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:04.645678  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:04.762705  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:04.762793  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:05.104713  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:05.145523  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:05.262711  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:05.263142  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:05.603656  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:05.645384  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:05.756593  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:34:05.762220  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:05.762442  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:06.104276  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:06.145977  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:06.263109  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:06.264246  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:06.605053  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:06.645105  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:06.761724  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:06.762593  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:07.104549  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:07.146001  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:07.262265  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:07.262528  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:07.603862  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:07.645120  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:07.762233  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:07.762720  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:08.104365  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:08.145901  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:08.256630  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:34:08.262630  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:08.263422  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:08.603598  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:08.645197  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:08.761304  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:08.762056  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:09.104651  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:09.145806  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:09.262057  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:09.262888  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:09.604550  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:09.645470  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:09.762054  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:09.763110  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:10.104284  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:10.146229  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:10.257522  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:34:10.261362  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:10.261939  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:10.604131  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:10.646061  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:10.761374  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:10.762267  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:11.104686  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:11.145067  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:11.262003  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:11.262977  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:11.603815  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:11.645555  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:11.762188  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:11.762588  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:12.104640  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:12.145951  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:12.261659  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:12.262461  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:12.604373  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:12.645942  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:12.757266  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:34:12.762383  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:12.762661  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:13.103567  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:13.146021  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:13.262280  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:13.262859  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:13.604082  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:13.650984  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:13.761311  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:13.762021  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:14.104043  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:14.145580  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:14.261335  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:14.262064  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:14.603947  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:14.646679  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:14.762765  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:14.762778  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:15.117766  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:15.153240  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:15.267487  283957 node_ready.go:49] node "addons-926553" has status "Ready":"True"
	I0831 22:34:15.267564  283957 node_ready.go:38] duration metric: took 39.514789095s for node "addons-926553" to be "Ready" ...
	I0831 22:34:15.267592  283957 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 22:34:15.275732  283957 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0831 22:34:15.275809  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:15.276442  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:15.280854  283957 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-sljbt" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:15.629987  283957 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0831 22:34:15.630065  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:15.668193  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:15.787659  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:15.789778  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:16.105852  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:16.145825  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:16.278884  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:16.280021  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:16.605440  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:16.645318  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:16.762858  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:16.765023  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:16.787617  283957 pod_ready.go:93] pod "coredns-6f6b679f8f-sljbt" in "kube-system" namespace has status "Ready":"True"
	I0831 22:34:16.787642  283957 pod_ready.go:82] duration metric: took 1.506753163s for pod "coredns-6f6b679f8f-sljbt" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:16.787677  283957 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-926553" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:16.794111  283957 pod_ready.go:93] pod "etcd-addons-926553" in "kube-system" namespace has status "Ready":"True"
	I0831 22:34:16.794139  283957 pod_ready.go:82] duration metric: took 6.444642ms for pod "etcd-addons-926553" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:16.794155  283957 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-926553" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:16.799542  283957 pod_ready.go:93] pod "kube-apiserver-addons-926553" in "kube-system" namespace has status "Ready":"True"
	I0831 22:34:16.799569  283957 pod_ready.go:82] duration metric: took 5.386535ms for pod "kube-apiserver-addons-926553" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:16.799580  283957 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-926553" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:16.806679  283957 pod_ready.go:93] pod "kube-controller-manager-addons-926553" in "kube-system" namespace has status "Ready":"True"
	I0831 22:34:16.806707  283957 pod_ready.go:82] duration metric: took 7.118805ms for pod "kube-controller-manager-addons-926553" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:16.806721  283957 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2x2mt" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:16.857188  283957 pod_ready.go:93] pod "kube-proxy-2x2mt" in "kube-system" namespace has status "Ready":"True"
	I0831 22:34:16.857218  283957 pod_ready.go:82] duration metric: took 50.489915ms for pod "kube-proxy-2x2mt" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:16.857230  283957 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-926553" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:17.105581  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:17.146191  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:17.258600  283957 pod_ready.go:93] pod "kube-scheduler-addons-926553" in "kube-system" namespace has status "Ready":"True"
	I0831 22:34:17.258669  283957 pod_ready.go:82] duration metric: took 401.429253ms for pod "kube-scheduler-addons-926553" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:17.258694  283957 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:17.261667  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:17.262687  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:17.604862  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:17.646272  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:17.764936  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:17.765793  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:18.107931  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:18.207202  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:18.302173  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:18.302637  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:18.606559  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:18.646357  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:18.775904  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:18.780122  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:19.110151  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:19.146402  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:19.272716  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:19.275660  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:19.278834  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:19.607312  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:19.646989  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:19.764462  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:19.765340  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:20.108138  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:20.158436  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:20.265037  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:20.265857  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:20.607204  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:20.649365  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:20.766184  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:20.766778  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:21.117188  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:21.146229  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:21.264649  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:21.267229  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:21.605997  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:21.646189  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:21.769252  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:21.776432  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:21.779045  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:22.105797  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:22.205291  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:22.270938  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:22.272159  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:22.606720  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:22.645319  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:22.768212  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:22.769045  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:23.105481  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:23.146025  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:23.264716  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:23.266628  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:23.604946  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:23.645376  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:23.766158  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:23.767067  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:23.797542  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:24.105732  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:24.147335  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:24.266279  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:24.267261  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:24.606800  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:24.646677  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:24.766259  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:24.767462  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:25.106518  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:25.205453  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:25.314730  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:25.316362  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:25.607028  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:25.650341  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:25.770511  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:25.773834  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:26.104894  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:26.145895  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:26.263752  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:26.265016  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:26.267354  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:26.605178  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:26.645897  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:26.767644  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:26.768292  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:27.105737  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:27.145850  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:27.264918  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:27.265889  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:27.605106  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:27.645943  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:27.764477  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:27.766607  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:28.107629  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:28.207239  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:28.263084  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:28.264194  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:28.605775  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:28.646375  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:28.762388  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:28.764546  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:28.767472  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:29.106278  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:29.146524  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:29.265912  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:29.268490  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:29.605745  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:29.646867  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:29.765756  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:29.772314  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:30.122548  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:30.148292  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:30.279047  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:30.280259  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:30.604607  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:30.645863  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:30.765718  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:30.766955  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:30.770653  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:31.107084  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:31.145821  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:31.265330  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:31.266346  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:31.606351  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:31.646041  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:31.762658  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:31.765467  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:32.105934  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:32.145601  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:32.264777  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:32.266337  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:32.605229  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:32.646223  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:32.774989  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:32.785303  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:32.785784  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:33.105083  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:33.146512  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:33.263890  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:33.265992  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:33.606498  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:33.645662  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:33.763811  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:33.764819  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:34.105423  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:34.145701  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:34.266956  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:34.269628  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:34.605901  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:34.645149  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:34.763985  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:34.765038  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:35.112775  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:35.147243  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:35.271686  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:35.273029  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:35.277897  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:35.605975  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:35.645757  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:35.764098  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:35.764377  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:36.106052  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:36.146574  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:36.266738  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:36.269371  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:36.605156  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:36.646109  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:36.766567  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:36.767069  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:37.105482  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:37.145408  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:37.262842  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:37.264940  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:37.605630  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:37.645579  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:37.763903  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:37.764638  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:37.768030  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:38.105602  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:38.145844  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:38.279984  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:38.281288  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:38.606189  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:38.645328  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:38.766976  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:38.768517  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:39.107588  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:39.145837  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:39.267811  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:39.269043  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:39.604990  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:39.645894  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:39.764577  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:39.765987  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:39.783324  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:40.110946  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:40.149038  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:40.263916  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:40.264452  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:40.605702  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:40.646035  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:40.762583  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:40.765830  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:41.104722  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:41.146251  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:41.267893  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:41.270170  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:41.605079  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:41.646109  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:41.766428  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:41.767660  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:42.108325  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:42.152284  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:42.277162  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:42.278233  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:42.280340  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:42.605427  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:42.645085  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:42.764212  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:42.764388  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:43.105237  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:43.145656  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:43.264399  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:43.265176  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:43.605756  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:43.646160  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:43.767679  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:43.777857  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:44.106039  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:44.146446  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:44.299193  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:44.309733  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:44.326060  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:44.605473  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:44.645672  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:44.763034  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:44.764053  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:45.111264  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:45.159920  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:45.269565  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:45.270011  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:45.605305  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:45.646239  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:45.778410  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:45.779825  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:46.104643  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:46.146156  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:46.264631  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:46.267013  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:46.622647  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:46.646343  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:46.764083  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:46.765335  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:46.769473  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:47.105381  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:47.145795  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:47.263471  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:47.265096  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:47.605821  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:47.646133  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:47.763675  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:47.765088  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:48.105731  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:48.146388  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:48.277910  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:48.279115  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:48.607534  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:48.646422  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:48.771915  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:48.773860  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:48.783304  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:49.105357  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:49.146229  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:49.265098  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:49.266325  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:49.606355  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:49.645828  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:49.775820  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:49.779206  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:50.107042  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:50.146396  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:50.265357  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:50.268892  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:50.606663  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:50.649461  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:50.766106  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:50.768357  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:51.106471  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:51.145827  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:51.263868  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:51.273856  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:51.276035  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:51.605984  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:51.646501  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:51.770956  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:51.775016  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:52.105268  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:52.145877  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:52.263405  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:52.264901  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:52.606281  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:52.646369  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:52.774325  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:52.775093  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:53.106374  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:53.146473  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:53.267665  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:53.269478  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:53.276369  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:53.607786  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:53.705941  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:53.808463  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:53.808930  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:54.106742  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:54.146131  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:54.262778  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:54.263743  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:54.605780  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:54.645489  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:54.763543  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:54.764691  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:55.105073  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:55.146671  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:55.263581  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:55.264593  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:55.604808  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:55.645627  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:55.765957  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:55.767629  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:55.774463  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:56.106436  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:56.147428  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:56.274490  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:56.276298  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:56.606475  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:56.663836  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:56.768576  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:56.770804  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:57.105671  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:57.146711  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:57.264259  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:57.270150  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:57.607038  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:57.645905  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:57.766741  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:57.769544  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:57.777959  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:58.105648  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:58.146227  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:58.265054  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:58.265762  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:58.605480  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:58.646483  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:58.766211  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:58.768130  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:59.105789  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:59.146597  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:59.265677  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:59.269145  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:59.605347  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:59.645340  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:59.765278  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:59.767138  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:00.159210  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:00.164293  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:00.328550  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:00.329744  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:35:00.335942  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:00.606480  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:00.647703  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:00.763533  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:35:00.765948  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:01.106323  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:01.146291  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:01.264390  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:01.265200  283957 kapi.go:107] duration metric: took 1m25.507422226s to wait for kubernetes.io/minikube-addons=registry ...
	I0831 22:35:01.612483  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:01.646438  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:01.767506  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:02.106814  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:02.206008  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:02.262315  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:02.606382  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:02.645915  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:02.764109  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:02.766427  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:03.105521  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:03.145663  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:03.262337  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:03.605065  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:03.645471  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:03.763085  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:04.105575  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:04.146506  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:04.265127  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:04.622274  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:04.650220  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:04.763587  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:04.771154  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:05.107755  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:05.146930  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:05.263894  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:05.605375  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:05.645868  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:05.764781  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:06.105494  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:06.146233  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:06.262353  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:06.609706  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:06.646514  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:06.766654  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:07.105395  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:07.147002  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:07.265286  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:07.269347  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:07.605980  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:07.645479  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:07.766524  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:08.105796  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:08.146353  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:08.280220  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:08.606605  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:08.645535  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:08.764454  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:09.105835  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:09.145440  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:09.262310  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:09.605511  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:09.646558  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:09.765787  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:09.767713  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:10.107122  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:10.146046  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:10.271694  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:10.606278  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:10.645926  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:10.767543  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:11.106465  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:11.150614  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:11.263411  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:11.610421  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:11.653984  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:11.768938  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:12.105749  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:12.205140  283957 kapi.go:107] duration metric: took 1m31.563232697s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0831 22:35:12.208102  283957 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-926553 cluster.
	I0831 22:35:12.210660  283957 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0831 22:35:12.213274  283957 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0831 22:35:12.264022  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:12.265955  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:12.604934  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:12.763295  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:13.105032  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:13.262133  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:13.606171  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:13.764828  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:14.106701  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:14.261801  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:14.604865  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:14.765083  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:14.771193  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:15.110555  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:15.271540  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:15.605431  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:15.766094  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:16.110167  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:16.267927  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:16.606034  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:16.764905  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:16.766036  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:17.105448  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:17.264901  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:17.604881  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:17.764247  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:18.107297  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:18.263113  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:18.607207  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:18.763761  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:18.767424  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:19.105348  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:19.265466  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:19.606177  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:19.772514  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:20.107301  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:20.265082  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:20.606295  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:20.762817  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:20.769525  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:21.106007  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:21.262749  283957 kapi.go:107] duration metric: took 1m45.504982271s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0831 22:35:21.610332  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:22.123132  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:22.606681  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:23.106303  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:23.265347  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:23.610785  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:24.108937  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:24.604883  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:25.106603  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:25.266133  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:25.605612  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:25.786474  283957 pod_ready.go:93] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"True"
	I0831 22:35:25.786506  283957 pod_ready.go:82] duration metric: took 1m8.527790413s for pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace to be "Ready" ...
	I0831 22:35:25.786520  283957 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-9xvjf" in "kube-system" namespace to be "Ready" ...
	I0831 22:35:25.795290  283957 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-9xvjf" in "kube-system" namespace has status "Ready":"True"
	I0831 22:35:25.795318  283957 pod_ready.go:82] duration metric: took 8.78951ms for pod "nvidia-device-plugin-daemonset-9xvjf" in "kube-system" namespace to be "Ready" ...
	I0831 22:35:25.795341  283957 pod_ready.go:39] duration metric: took 1m10.52768296s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 22:35:25.795356  283957 api_server.go:52] waiting for apiserver process to appear ...
	I0831 22:35:25.795434  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0831 22:35:25.795702  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0831 22:35:25.886248  283957 cri.go:89] found id: "4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7"
	I0831 22:35:25.886322  283957 cri.go:89] found id: ""
	I0831 22:35:25.886358  283957 logs.go:276] 1 containers: [4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7]
	I0831 22:35:25.886451  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:25.890246  283957 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0831 22:35:25.890401  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0831 22:35:25.961145  283957 cri.go:89] found id: "a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678"
	I0831 22:35:25.961169  283957 cri.go:89] found id: ""
	I0831 22:35:25.961177  283957 logs.go:276] 1 containers: [a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678]
	I0831 22:35:25.961232  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:25.971647  283957 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0831 22:35:25.971720  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0831 22:35:26.081420  283957 cri.go:89] found id: "c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb"
	I0831 22:35:26.081442  283957 cri.go:89] found id: ""
	I0831 22:35:26.081450  283957 logs.go:276] 1 containers: [c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb]
	I0831 22:35:26.081509  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:26.086692  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0831 22:35:26.086769  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0831 22:35:26.106149  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:26.187973  283957 cri.go:89] found id: "29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b"
	I0831 22:35:26.187996  283957 cri.go:89] found id: ""
	I0831 22:35:26.188004  283957 logs.go:276] 1 containers: [29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b]
	I0831 22:35:26.188061  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:26.192877  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0831 22:35:26.192951  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0831 22:35:26.297630  283957 cri.go:89] found id: "38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87"
	I0831 22:35:26.297653  283957 cri.go:89] found id: ""
	I0831 22:35:26.297662  283957 logs.go:276] 1 containers: [38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87]
	I0831 22:35:26.297719  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:26.305863  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0831 22:35:26.305932  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0831 22:35:26.386494  283957 cri.go:89] found id: "cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0"
	I0831 22:35:26.386518  283957 cri.go:89] found id: ""
	I0831 22:35:26.386526  283957 logs.go:276] 1 containers: [cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0]
	I0831 22:35:26.386596  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:26.391560  283957 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0831 22:35:26.391632  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0831 22:35:26.446888  283957 cri.go:89] found id: "7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171"
	I0831 22:35:26.446911  283957 cri.go:89] found id: ""
	I0831 22:35:26.446919  283957 logs.go:276] 1 containers: [7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171]
	I0831 22:35:26.446974  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:26.452924  283957 logs.go:123] Gathering logs for coredns [c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb] ...
	I0831 22:35:26.452953  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb"
	I0831 22:35:26.520818  283957 logs.go:123] Gathering logs for kube-proxy [38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87] ...
	I0831 22:35:26.520850  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87"
	I0831 22:35:26.579607  283957 logs.go:123] Gathering logs for kube-controller-manager [cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0] ...
	I0831 22:35:26.579638  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0"
	I0831 22:35:26.605871  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:26.676077  283957 logs.go:123] Gathering logs for kindnet [7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171] ...
	I0831 22:35:26.676186  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171"
	I0831 22:35:26.772215  283957 logs.go:123] Gathering logs for CRI-O ...
	I0831 22:35:26.772299  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0831 22:35:26.885704  283957 logs.go:123] Gathering logs for kubelet ...
	I0831 22:35:26.885743  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0831 22:35:26.971800  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006611    1497 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-926553' and this object
	W0831 22:35:26.972187  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006663    1497 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-926553' and this object
	W0831 22:35:26.972448  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006673    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:26.972661  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006614    1497 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-926553' and this object
	W0831 22:35:26.972903  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006691    1497 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:26.973166  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006699    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	I0831 22:35:27.026028  283957 logs.go:123] Gathering logs for describe nodes ...
	I0831 22:35:27.026122  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 22:35:27.121170  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:27.306579  283957 logs.go:123] Gathering logs for etcd [a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678] ...
	I0831 22:35:27.306611  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678"
	I0831 22:35:27.381339  283957 logs.go:123] Gathering logs for kube-scheduler [29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b] ...
	I0831 22:35:27.381381  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b"
	I0831 22:35:27.432923  283957 logs.go:123] Gathering logs for container status ...
	I0831 22:35:27.432958  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 22:35:27.505422  283957 logs.go:123] Gathering logs for dmesg ...
	I0831 22:35:27.505456  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 22:35:27.523608  283957 logs.go:123] Gathering logs for kube-apiserver [4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7] ...
	I0831 22:35:27.523691  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7"
	I0831 22:35:27.594979  283957 out.go:358] Setting ErrFile to fd 2...
	I0831 22:35:27.595049  283957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 22:35:27.595118  283957 out.go:270] X Problems detected in kubelet:
	W0831 22:35:27.595127  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006663    1497 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-926553' and this object
	W0831 22:35:27.595134  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006673    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:27.595140  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006614    1497 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-926553' and this object
	W0831 22:35:27.595148  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006691    1497 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:27.595158  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006699    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	I0831 22:35:27.595169  283957 out.go:358] Setting ErrFile to fd 2...
	I0831 22:35:27.595175  283957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:35:27.606018  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:28.107291  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:28.607326  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:29.107920  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:29.605540  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:30.116852  283957 kapi.go:107] duration metric: took 1m54.016739242s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0831 22:35:30.119299  283957 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, storage-provisioner, metrics-server, yakd, inspektor-gadget, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0831 22:35:30.123306  283957 addons.go:510] duration metric: took 2m1.799821522s for enable addons: enabled=[nvidia-device-plugin ingress-dns cloud-spanner storage-provisioner metrics-server yakd inspektor-gadget default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0831 22:35:37.595431  283957 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:35:37.609347  283957 api_server.go:72] duration metric: took 2m9.286263895s to wait for apiserver process to appear ...
	I0831 22:35:37.609372  283957 api_server.go:88] waiting for apiserver healthz status ...
	I0831 22:35:37.609409  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0831 22:35:37.609464  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0831 22:35:37.653375  283957 cri.go:89] found id: "4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7"
	I0831 22:35:37.653399  283957 cri.go:89] found id: ""
	I0831 22:35:37.653408  283957 logs.go:276] 1 containers: [4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7]
	I0831 22:35:37.653466  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:37.657014  283957 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0831 22:35:37.657091  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0831 22:35:37.702049  283957 cri.go:89] found id: "a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678"
	I0831 22:35:37.702081  283957 cri.go:89] found id: ""
	I0831 22:35:37.702090  283957 logs.go:276] 1 containers: [a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678]
	I0831 22:35:37.702148  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:37.705948  283957 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0831 22:35:37.706022  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0831 22:35:37.743979  283957 cri.go:89] found id: "c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb"
	I0831 22:35:37.744002  283957 cri.go:89] found id: ""
	I0831 22:35:37.744010  283957 logs.go:276] 1 containers: [c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb]
	I0831 22:35:37.744067  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:37.748167  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0831 22:35:37.748235  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0831 22:35:37.787366  283957 cri.go:89] found id: "29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b"
	I0831 22:35:37.787387  283957 cri.go:89] found id: ""
	I0831 22:35:37.787394  283957 logs.go:276] 1 containers: [29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b]
	I0831 22:35:37.787456  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:37.791268  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0831 22:35:37.791418  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0831 22:35:37.839012  283957 cri.go:89] found id: "38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87"
	I0831 22:35:37.839032  283957 cri.go:89] found id: ""
	I0831 22:35:37.839040  283957 logs.go:276] 1 containers: [38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87]
	I0831 22:35:37.839095  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:37.842773  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0831 22:35:37.842857  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0831 22:35:37.882906  283957 cri.go:89] found id: "cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0"
	I0831 22:35:37.882928  283957 cri.go:89] found id: ""
	I0831 22:35:37.882936  283957 logs.go:276] 1 containers: [cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0]
	I0831 22:35:37.883016  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:37.886592  283957 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0831 22:35:37.886701  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0831 22:35:37.929003  283957 cri.go:89] found id: "7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171"
	I0831 22:35:37.929026  283957 cri.go:89] found id: ""
	I0831 22:35:37.929034  283957 logs.go:276] 1 containers: [7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171]
	I0831 22:35:37.929089  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:37.932647  283957 logs.go:123] Gathering logs for coredns [c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb] ...
	I0831 22:35:37.932675  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb"
	I0831 22:35:37.976634  283957 logs.go:123] Gathering logs for kube-scheduler [29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b] ...
	I0831 22:35:37.976663  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b"
	I0831 22:35:38.029768  283957 logs.go:123] Gathering logs for kube-proxy [38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87] ...
	I0831 22:35:38.029845  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87"
	I0831 22:35:38.089134  283957 logs.go:123] Gathering logs for kindnet [7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171] ...
	I0831 22:35:38.089209  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171"
	I0831 22:35:38.133397  283957 logs.go:123] Gathering logs for container status ...
	I0831 22:35:38.133434  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 22:35:38.191973  283957 logs.go:123] Gathering logs for kubelet ...
	I0831 22:35:38.192003  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0831 22:35:38.254593  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006611    1497 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-926553' and this object
	W0831 22:35:38.254790  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006663    1497 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-926553' and this object
	W0831 22:35:38.255021  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006673    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:38.255206  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006614    1497 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-926553' and this object
	W0831 22:35:38.255426  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006691    1497 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:38.255652  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006699    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	I0831 22:35:38.293315  283957 logs.go:123] Gathering logs for dmesg ...
	I0831 22:35:38.293348  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 22:35:38.309324  283957 logs.go:123] Gathering logs for describe nodes ...
	I0831 22:35:38.309354  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 22:35:38.449465  283957 logs.go:123] Gathering logs for CRI-O ...
	I0831 22:35:38.449541  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0831 22:35:38.557894  283957 logs.go:123] Gathering logs for kube-apiserver [4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7] ...
	I0831 22:35:38.557935  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7"
	I0831 22:35:38.613020  283957 logs.go:123] Gathering logs for etcd [a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678] ...
	I0831 22:35:38.613053  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678"
	I0831 22:35:38.667543  283957 logs.go:123] Gathering logs for kube-controller-manager [cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0] ...
	I0831 22:35:38.667580  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0"
	I0831 22:35:38.774202  283957 out.go:358] Setting ErrFile to fd 2...
	I0831 22:35:38.774279  283957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 22:35:38.774360  283957 out.go:270] X Problems detected in kubelet:
	W0831 22:35:38.774399  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006663    1497 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-926553' and this object
	W0831 22:35:38.774433  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006673    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:38.774476  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006614    1497 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-926553' and this object
	W0831 22:35:38.774510  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006691    1497 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:38.774544  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006699    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	I0831 22:35:38.774579  283957 out.go:358] Setting ErrFile to fd 2...
	I0831 22:35:38.774586  283957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:35:48.775832  283957 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 22:35:48.783566  283957 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0831 22:35:48.786206  283957 api_server.go:141] control plane version: v1.31.0
	I0831 22:35:48.786241  283957 api_server.go:131] duration metric: took 11.176861075s to wait for apiserver health ...
	I0831 22:35:48.786251  283957 system_pods.go:43] waiting for kube-system pods to appear ...
	I0831 22:35:48.786273  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0831 22:35:48.786338  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0831 22:35:48.824896  283957 cri.go:89] found id: "4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7"
	I0831 22:35:48.824918  283957 cri.go:89] found id: ""
	I0831 22:35:48.824927  283957 logs.go:276] 1 containers: [4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7]
	I0831 22:35:48.824984  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:48.828359  283957 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0831 22:35:48.828472  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0831 22:35:48.869702  283957 cri.go:89] found id: "a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678"
	I0831 22:35:48.869727  283957 cri.go:89] found id: ""
	I0831 22:35:48.869735  283957 logs.go:276] 1 containers: [a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678]
	I0831 22:35:48.869811  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:48.873344  283957 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0831 22:35:48.873422  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0831 22:35:48.912098  283957 cri.go:89] found id: "c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb"
	I0831 22:35:48.912121  283957 cri.go:89] found id: ""
	I0831 22:35:48.912129  283957 logs.go:276] 1 containers: [c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb]
	I0831 22:35:48.912185  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:48.915599  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0831 22:35:48.915669  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0831 22:35:48.958620  283957 cri.go:89] found id: "29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b"
	I0831 22:35:48.958644  283957 cri.go:89] found id: ""
	I0831 22:35:48.958653  283957 logs.go:276] 1 containers: [29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b]
	I0831 22:35:48.958744  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:48.962169  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0831 22:35:48.962244  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0831 22:35:49.006023  283957 cri.go:89] found id: "38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87"
	I0831 22:35:49.006048  283957 cri.go:89] found id: ""
	I0831 22:35:49.006056  283957 logs.go:276] 1 containers: [38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87]
	I0831 22:35:49.006118  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:49.011545  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0831 22:35:49.011654  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0831 22:35:49.054445  283957 cri.go:89] found id: "cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0"
	I0831 22:35:49.054469  283957 cri.go:89] found id: ""
	I0831 22:35:49.054478  283957 logs.go:276] 1 containers: [cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0]
	I0831 22:35:49.054566  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:49.058214  283957 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0831 22:35:49.058292  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0831 22:35:49.096178  283957 cri.go:89] found id: "7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171"
	I0831 22:35:49.096203  283957 cri.go:89] found id: ""
	I0831 22:35:49.096211  283957 logs.go:276] 1 containers: [7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171]
	I0831 22:35:49.096265  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:49.099723  283957 logs.go:123] Gathering logs for kube-proxy [38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87] ...
	I0831 22:35:49.099762  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87"
	I0831 22:35:49.139017  283957 logs.go:123] Gathering logs for kube-controller-manager [cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0] ...
	I0831 22:35:49.139048  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0"
	I0831 22:35:49.212561  283957 logs.go:123] Gathering logs for kindnet [7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171] ...
	I0831 22:35:49.212599  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171"
	I0831 22:35:49.257845  283957 logs.go:123] Gathering logs for container status ...
	I0831 22:35:49.257877  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 22:35:49.305619  283957 logs.go:123] Gathering logs for describe nodes ...
	I0831 22:35:49.305649  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 22:35:49.445076  283957 logs.go:123] Gathering logs for kube-apiserver [4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7] ...
	I0831 22:35:49.445108  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7"
	I0831 22:35:49.511728  283957 logs.go:123] Gathering logs for kube-scheduler [29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b] ...
	I0831 22:35:49.511762  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b"
	I0831 22:35:49.559678  283957 logs.go:123] Gathering logs for coredns [c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb] ...
	I0831 22:35:49.559715  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb"
	I0831 22:35:49.600032  283957 logs.go:123] Gathering logs for CRI-O ...
	I0831 22:35:49.600066  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0831 22:35:49.699340  283957 logs.go:123] Gathering logs for kubelet ...
	I0831 22:35:49.699382  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0831 22:35:49.762989  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006611    1497 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-926553' and this object
	W0831 22:35:49.763218  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006663    1497 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-926553' and this object
	W0831 22:35:49.763449  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006673    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:49.763640  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006614    1497 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-926553' and this object
	W0831 22:35:49.763860  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006691    1497 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:49.764086  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006699    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	I0831 22:35:49.804313  283957 logs.go:123] Gathering logs for dmesg ...
	I0831 22:35:49.804351  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 22:35:49.820979  283957 logs.go:123] Gathering logs for etcd [a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678] ...
	I0831 22:35:49.821065  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678"
	I0831 22:35:49.873854  283957 out.go:358] Setting ErrFile to fd 2...
	I0831 22:35:49.873890  283957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 22:35:49.873974  283957 out.go:270] X Problems detected in kubelet:
	W0831 22:35:49.873986  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006663    1497 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-926553' and this object
	W0831 22:35:49.874019  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006673    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:49.874034  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006614    1497 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-926553' and this object
	W0831 22:35:49.874045  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006691    1497 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:49.874060  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006699    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	I0831 22:35:49.874067  283957 out.go:358] Setting ErrFile to fd 2...
	I0831 22:35:49.874074  283957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:35:59.888880  283957 system_pods.go:59] 18 kube-system pods found
	I0831 22:35:59.888960  283957 system_pods.go:61] "coredns-6f6b679f8f-sljbt" [06a33215-e61b-42b2-8530-9e2d768b6a23] Running
	I0831 22:35:59.888985  283957 system_pods.go:61] "csi-hostpath-attacher-0" [b526f874-5e15-4810-bcf9-07f50444c734] Running
	I0831 22:35:59.889010  283957 system_pods.go:61] "csi-hostpath-resizer-0" [492b4def-63d0-41e6-8f33-d77ee6d90893] Running
	I0831 22:35:59.889033  283957 system_pods.go:61] "csi-hostpathplugin-25wkk" [ed567cf4-35bb-4262-b77d-eddfcd36f96f] Running
	I0831 22:35:59.889053  283957 system_pods.go:61] "etcd-addons-926553" [e15b7cec-a13a-4582-ab11-374125bab61d] Running
	I0831 22:35:59.889074  283957 system_pods.go:61] "kindnet-wdlp4" [242e7fe0-de25-4fe8-9782-2cadf1e54e96] Running
	I0831 22:35:59.889093  283957 system_pods.go:61] "kube-apiserver-addons-926553" [0dd9d30a-f426-4944-9893-5f1537844c18] Running
	I0831 22:35:59.889115  283957 system_pods.go:61] "kube-controller-manager-addons-926553" [1ded4cb8-0f32-4a80-86b8-0cd41aef43eb] Running
	I0831 22:35:59.889134  283957 system_pods.go:61] "kube-ingress-dns-minikube" [0e07561b-af16-4df3-8e88-438e733a8930] Running
	I0831 22:35:59.889154  283957 system_pods.go:61] "kube-proxy-2x2mt" [8feaacf8-dae0-4095-966f-966ceed56f36] Running
	I0831 22:35:59.889175  283957 system_pods.go:61] "kube-scheduler-addons-926553" [34db2652-e629-4869-a324-d4aca6527e88] Running
	I0831 22:35:59.889195  283957 system_pods.go:61] "metrics-server-84c5f94fbc-zwvsl" [8f41beaa-bd4a-4b1b-957e-d5fcd9f7aba9] Running
	I0831 22:35:59.889218  283957 system_pods.go:61] "nvidia-device-plugin-daemonset-9xvjf" [77f942fc-bc62-43bb-8ecc-dbe7e16cab48] Running
	I0831 22:35:59.889238  283957 system_pods.go:61] "registry-6fb4cdfc84-bf4pl" [000dc781-4a18-4524-b73a-681e34eaa529] Running
	I0831 22:35:59.889260  283957 system_pods.go:61] "registry-proxy-6dfvf" [f354b100-f3b2-4369-b6de-637de12a35fb] Running
	I0831 22:35:59.889280  283957 system_pods.go:61] "snapshot-controller-56fcc65765-55n8n" [49bef057-02c3-4bcf-8da2-c5fa9980394f] Running
	I0831 22:35:59.889300  283957 system_pods.go:61] "snapshot-controller-56fcc65765-j4sjq" [61dde631-692d-4175-9747-daa00ca99dc7] Running
	I0831 22:35:59.889321  283957 system_pods.go:61] "storage-provisioner" [396f5f2a-755e-492f-a0ac-fa7cb6f31a10] Running
	I0831 22:35:59.889343  283957 system_pods.go:74] duration metric: took 11.103084876s to wait for pod list to return data ...
	I0831 22:35:59.889364  283957 default_sa.go:34] waiting for default service account to be created ...
	I0831 22:35:59.892759  283957 default_sa.go:45] found service account: "default"
	I0831 22:35:59.892790  283957 default_sa.go:55] duration metric: took 3.404577ms for default service account to be created ...
	I0831 22:35:59.892801  283957 system_pods.go:116] waiting for k8s-apps to be running ...
	I0831 22:35:59.903086  283957 system_pods.go:86] 18 kube-system pods found
	I0831 22:35:59.903124  283957 system_pods.go:89] "coredns-6f6b679f8f-sljbt" [06a33215-e61b-42b2-8530-9e2d768b6a23] Running
	I0831 22:35:59.903134  283957 system_pods.go:89] "csi-hostpath-attacher-0" [b526f874-5e15-4810-bcf9-07f50444c734] Running
	I0831 22:35:59.903139  283957 system_pods.go:89] "csi-hostpath-resizer-0" [492b4def-63d0-41e6-8f33-d77ee6d90893] Running
	I0831 22:35:59.903143  283957 system_pods.go:89] "csi-hostpathplugin-25wkk" [ed567cf4-35bb-4262-b77d-eddfcd36f96f] Running
	I0831 22:35:59.903148  283957 system_pods.go:89] "etcd-addons-926553" [e15b7cec-a13a-4582-ab11-374125bab61d] Running
	I0831 22:35:59.903152  283957 system_pods.go:89] "kindnet-wdlp4" [242e7fe0-de25-4fe8-9782-2cadf1e54e96] Running
	I0831 22:35:59.903157  283957 system_pods.go:89] "kube-apiserver-addons-926553" [0dd9d30a-f426-4944-9893-5f1537844c18] Running
	I0831 22:35:59.903162  283957 system_pods.go:89] "kube-controller-manager-addons-926553" [1ded4cb8-0f32-4a80-86b8-0cd41aef43eb] Running
	I0831 22:35:59.903168  283957 system_pods.go:89] "kube-ingress-dns-minikube" [0e07561b-af16-4df3-8e88-438e733a8930] Running
	I0831 22:35:59.903173  283957 system_pods.go:89] "kube-proxy-2x2mt" [8feaacf8-dae0-4095-966f-966ceed56f36] Running
	I0831 22:35:59.903178  283957 system_pods.go:89] "kube-scheduler-addons-926553" [34db2652-e629-4869-a324-d4aca6527e88] Running
	I0831 22:35:59.903182  283957 system_pods.go:89] "metrics-server-84c5f94fbc-zwvsl" [8f41beaa-bd4a-4b1b-957e-d5fcd9f7aba9] Running
	I0831 22:35:59.903191  283957 system_pods.go:89] "nvidia-device-plugin-daemonset-9xvjf" [77f942fc-bc62-43bb-8ecc-dbe7e16cab48] Running
	I0831 22:35:59.903195  283957 system_pods.go:89] "registry-6fb4cdfc84-bf4pl" [000dc781-4a18-4524-b73a-681e34eaa529] Running
	I0831 22:35:59.903199  283957 system_pods.go:89] "registry-proxy-6dfvf" [f354b100-f3b2-4369-b6de-637de12a35fb] Running
	I0831 22:35:59.903208  283957 system_pods.go:89] "snapshot-controller-56fcc65765-55n8n" [49bef057-02c3-4bcf-8da2-c5fa9980394f] Running
	I0831 22:35:59.903212  283957 system_pods.go:89] "snapshot-controller-56fcc65765-j4sjq" [61dde631-692d-4175-9747-daa00ca99dc7] Running
	I0831 22:35:59.903225  283957 system_pods.go:89] "storage-provisioner" [396f5f2a-755e-492f-a0ac-fa7cb6f31a10] Running
	I0831 22:35:59.903232  283957 system_pods.go:126] duration metric: took 10.425939ms to wait for k8s-apps to be running ...
	I0831 22:35:59.903240  283957 system_svc.go:44] waiting for kubelet service to be running ....
	I0831 22:35:59.903305  283957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:35:59.914900  283957 system_svc.go:56] duration metric: took 11.64979ms WaitForService to wait for kubelet
	I0831 22:35:59.914930  283957 kubeadm.go:582] duration metric: took 2m31.591852103s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 22:35:59.914951  283957 node_conditions.go:102] verifying NodePressure condition ...
	I0831 22:35:59.918337  283957 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0831 22:35:59.918373  283957 node_conditions.go:123] node cpu capacity is 2
	I0831 22:35:59.918383  283957 node_conditions.go:105] duration metric: took 3.427642ms to run NodePressure ...
	I0831 22:35:59.918397  283957 start.go:241] waiting for startup goroutines ...
	I0831 22:35:59.918404  283957 start.go:246] waiting for cluster config update ...
	I0831 22:35:59.918419  283957 start.go:255] writing updated cluster config ...
	I0831 22:35:59.918717  283957 ssh_runner.go:195] Run: rm -f paused
	I0831 22:36:00.538015  283957 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0831 22:36:00.544227  283957 out.go:177] * Done! kubectl is now configured to use "addons-926553" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 31 22:45:12 addons-926553 crio[969]: time="2024-08-31 22:45:12.822685879Z" level=info msg="Stopped pod sandbox: 7f7d7cecf732629b6dbd6b2a5039a010ef61f44c11a57190f0cd26d378246cf5" id=961ccce3-e42c-4ba5-bf44-a7e8a66257a6 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 31 22:45:14 addons-926553 crio[969]: time="2024-08-31 22:45:14.797461178Z" level=info msg="Stopping pod sandbox: cc43845d42aa66aa5e9584dac534d867a6c999d3334c18c18e94bd7e586e5126" id=7fe01321-2725-4585-a767-3e4a08561dcf name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 31 22:45:14 addons-926553 crio[969]: time="2024-08-31 22:45:14.797729565Z" level=info msg="Got pod network &{Name:registry-test Namespace:default ID:cc43845d42aa66aa5e9584dac534d867a6c999d3334c18c18e94bd7e586e5126 UID:e851bb87-ad25-4276-910b-2eb567439f7a NetNS:/var/run/netns/b16da6d8-4760-4024-96c1-93d1d017746e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 31 22:45:14 addons-926553 crio[969]: time="2024-08-31 22:45:14.797885970Z" level=info msg="Deleting pod default_registry-test from CNI network \"kindnet\" (type=ptp)"
	Aug 31 22:45:14 addons-926553 crio[969]: time="2024-08-31 22:45:14.827682662Z" level=info msg="Stopped pod sandbox: cc43845d42aa66aa5e9584dac534d867a6c999d3334c18c18e94bd7e586e5126" id=7fe01321-2725-4585-a767-3e4a08561dcf name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 31 22:45:15 addons-926553 crio[969]: time="2024-08-31 22:45:15.631066569Z" level=info msg="Stopping container: 253c8791474bd3529943d7e5e23f781a14fe0376240be8054563cbc06fdd132a (timeout: 30s)" id=e5bde2df-d238-4aca-a951-ac48e4904408 name=/runtime.v1.RuntimeService/StopContainer
	Aug 31 22:45:15 addons-926553 crio[969]: time="2024-08-31 22:45:15.693990198Z" level=info msg="Stopping container: 33089909b9ae9196ae44966dd513637f0c43528b3163034142ee8aa2aed3abea (timeout: 30s)" id=951f76e0-82c7-43a2-a2a8-5e7c6275a69c name=/runtime.v1.RuntimeService/StopContainer
	Aug 31 22:45:15 addons-926553 conmon[3436]: conmon 253c8791474bd3529943 <ninfo>: container 3447 exited with status 2
	Aug 31 22:45:15 addons-926553 crio[969]: time="2024-08-31 22:45:15.800373569Z" level=info msg="Stopped container 253c8791474bd3529943d7e5e23f781a14fe0376240be8054563cbc06fdd132a: kube-system/registry-6fb4cdfc84-bf4pl/registry" id=e5bde2df-d238-4aca-a951-ac48e4904408 name=/runtime.v1.RuntimeService/StopContainer
	Aug 31 22:45:15 addons-926553 crio[969]: time="2024-08-31 22:45:15.801669984Z" level=info msg="Stopping pod sandbox: 79a922eead90c0e3c70eaa57d4c3110a2ace13d4bddc7764de0605fdc99e783f" id=9923329b-89d8-426c-ab01-96ad6c792479 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 31 22:45:15 addons-926553 crio[969]: time="2024-08-31 22:45:15.802059774Z" level=info msg="Got pod network &{Name:registry-6fb4cdfc84-bf4pl Namespace:kube-system ID:79a922eead90c0e3c70eaa57d4c3110a2ace13d4bddc7764de0605fdc99e783f UID:000dc781-4a18-4524-b73a-681e34eaa529 NetNS:/var/run/netns/7504e92a-39db-42a2-8f00-ae2ebc067a72 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 31 22:45:15 addons-926553 crio[969]: time="2024-08-31 22:45:15.802209246Z" level=info msg="Deleting pod kube-system_registry-6fb4cdfc84-bf4pl from CNI network \"kindnet\" (type=ptp)"
	Aug 31 22:45:15 addons-926553 crio[969]: time="2024-08-31 22:45:15.839449896Z" level=info msg="Stopped pod sandbox: 79a922eead90c0e3c70eaa57d4c3110a2ace13d4bddc7764de0605fdc99e783f" id=9923329b-89d8-426c-ab01-96ad6c792479 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 31 22:45:15 addons-926553 crio[969]: time="2024-08-31 22:45:15.870427961Z" level=info msg="Stopped container 33089909b9ae9196ae44966dd513637f0c43528b3163034142ee8aa2aed3abea: kube-system/registry-proxy-6dfvf/registry-proxy" id=951f76e0-82c7-43a2-a2a8-5e7c6275a69c name=/runtime.v1.RuntimeService/StopContainer
	Aug 31 22:45:15 addons-926553 crio[969]: time="2024-08-31 22:45:15.872281057Z" level=info msg="Stopping pod sandbox: 9b324909614e41858fce798675764c5233e9e10b9df3f04918224183384fd14a" id=7899ef71-b431-48ee-8548-6bb68a5c45d4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 31 22:45:15 addons-926553 crio[969]: time="2024-08-31 22:45:15.878962364Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-SNLGGGJJ4PTX4CWQ - [0:0]\n:KUBE-HP-ZD7CWHOQU7LQSU4P - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-23XDI3UQKEK5ACU3 - [0:0]\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-xrqz9_ingress-nginx_30ac112c-2cb9-44df-8b86-e9a9804b4efa_0_ hostport 443\" -m tcp --dport 443 -j KUBE-HP-ZD7CWHOQU7LQSU4P\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-xrqz9_ingress-nginx_30ac112c-2cb9-44df-8b86-e9a9804b4efa_0_ hostport 80\" -m tcp --dport 80 -j KUBE-HP-23XDI3UQKEK5ACU3\n-A KUBE-HP-23XDI3UQKEK5ACU3 -s 10.244.0.20/32 -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-xrqz9_ingress-nginx_30ac112c-2cb9-44df-8b86-e9a9804b4efa_0_ hostport 80\" -j KUBE-MARK-MASQ\n-A KUBE-HP-23XDI3UQKEK5ACU3 -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-xrqz9_ingress-nginx_30ac112c-2cb9-44df-8b8
6-e9a9804b4efa_0_ hostport 80\" -m tcp -j DNAT --to-destination 10.244.0.20:80\n-A KUBE-HP-ZD7CWHOQU7LQSU4P -s 10.244.0.20/32 -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-xrqz9_ingress-nginx_30ac112c-2cb9-44df-8b86-e9a9804b4efa_0_ hostport 443\" -j KUBE-MARK-MASQ\n-A KUBE-HP-ZD7CWHOQU7LQSU4P -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-xrqz9_ingress-nginx_30ac112c-2cb9-44df-8b86-e9a9804b4efa_0_ hostport 443\" -m tcp -j DNAT --to-destination 10.244.0.20:443\n-X KUBE-HP-SNLGGGJJ4PTX4CWQ\nCOMMIT\n"
	Aug 31 22:45:15 addons-926553 crio[969]: time="2024-08-31 22:45:15.885980447Z" level=info msg="Closing host port tcp:5000"
	Aug 31 22:45:15 addons-926553 crio[969]: time="2024-08-31 22:45:15.892694566Z" level=info msg="Host port tcp:5000 does not have an open socket"
	Aug 31 22:45:15 addons-926553 crio[969]: time="2024-08-31 22:45:15.892908045Z" level=info msg="Got pod network &{Name:registry-proxy-6dfvf Namespace:kube-system ID:9b324909614e41858fce798675764c5233e9e10b9df3f04918224183384fd14a UID:f354b100-f3b2-4369-b6de-637de12a35fb NetNS:/var/run/netns/3af0374a-c99e-4c33-95c2-d907e0bc85bc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 31 22:45:15 addons-926553 crio[969]: time="2024-08-31 22:45:15.893051437Z" level=info msg="Deleting pod kube-system_registry-proxy-6dfvf from CNI network \"kindnet\" (type=ptp)"
	Aug 31 22:45:15 addons-926553 crio[969]: time="2024-08-31 22:45:15.977341354Z" level=info msg="Stopped pod sandbox: 9b324909614e41858fce798675764c5233e9e10b9df3f04918224183384fd14a" id=7899ef71-b431-48ee-8548-6bb68a5c45d4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 31 22:45:16 addons-926553 crio[969]: time="2024-08-31 22:45:16.817554820Z" level=info msg="Removing container: 33089909b9ae9196ae44966dd513637f0c43528b3163034142ee8aa2aed3abea" id=5b4a304d-0aba-415b-af71-bd1011414dca name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 31 22:45:16 addons-926553 crio[969]: time="2024-08-31 22:45:16.853560609Z" level=info msg="Removed container 33089909b9ae9196ae44966dd513637f0c43528b3163034142ee8aa2aed3abea: kube-system/registry-proxy-6dfvf/registry-proxy" id=5b4a304d-0aba-415b-af71-bd1011414dca name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 31 22:45:16 addons-926553 crio[969]: time="2024-08-31 22:45:16.862534418Z" level=info msg="Removing container: 253c8791474bd3529943d7e5e23f781a14fe0376240be8054563cbc06fdd132a" id=d78a7374-3cb9-4bc8-acd3-a0c1332559d6 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 31 22:45:16 addons-926553 crio[969]: time="2024-08-31 22:45:16.899831552Z" level=info msg="Removed container 253c8791474bd3529943d7e5e23f781a14fe0376240be8054563cbc06fdd132a: kube-system/registry-6fb4cdfc84-bf4pl/registry" id=d78a7374-3cb9-4bc8-acd3-a0c1332559d6 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c9ebd69621b7e       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                             6 seconds ago       Exited              helper-pod                0                   7f7d7cecf7326       helper-pod-delete-pvc-329ee4ba-4ee8-45f1-ba46-e92218961da0
	e8bd7d3c2a513       docker.io/library/busybox@sha256:82742949a3709938cbeb9cec79f5eaf3e48b255389f2dcedf2de29ef96fd841c                            8 seconds ago       Exited              busybox                   0                   7510c95cf16a4       test-local-path
	698105df216b2       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                            12 seconds ago      Exited              helper-pod                0                   1036caecdde1d       helper-pod-create-pvc-329ee4ba-4ee8-45f1-ba46-e92218961da0
	02a86c6253aa4       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc            5 minutes ago       Exited              gadget                    6                   6f15601ca90fa       gadget-nzlbh
	e9081eefaa134       registry.k8s.io/ingress-nginx/controller@sha256:22f9d129ae8c89a2cabbd13af3c1668944f3dd68fec186199b7024a0a2fc75b3             9 minutes ago       Running             controller                0                   97f5fa31394eb       ingress-nginx-controller-bc57996ff-xrqz9
	5102df2042c27       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69                 10 minutes ago      Running             gcp-auth                  0                   c6ce5424649e0       gcp-auth-89d5ffd79-ntcjg
	505b1ff847477       gcr.io/cloud-spanner-emulator/emulator@sha256:41ec188288c7943f488600462b2b74002814e52439be82d15de33c3ee4898a58               10 minutes ago      Running             cloud-spanner-emulator    0                   44bfd96babd05       cloud-spanner-emulator-769b77f747-nzrb6
	08c755cbd5fe4       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             10 minutes ago      Running             local-path-provisioner    0                   57dab6b5f6051       local-path-provisioner-86d989889c-5d9bc
	e279b35e3726f       420193b27261a8d37b9fb1faeed45094cefa47e72a7538fd5a6c05e8b5ce192e                                                             10 minutes ago      Exited              patch                     2                   44fcf4b002cf9       ingress-nginx-admission-patch-qsgmg
	bcf5108769347       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3   10 minutes ago      Exited              create                    0                   7aab5d7eec9ef       ingress-nginx-admission-create-pxdjc
	eb94fa29e1d5a       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             10 minutes ago      Running             minikube-ingress-dns      0                   56f512c9c7623       kube-ingress-dns-minikube
	1512f4dc6befd       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f        10 minutes ago      Running             metrics-server            0                   9ffbb41ccd3eb       metrics-server-84c5f94fbc-zwvsl
	d4a4a18a5a7f6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             11 minutes ago      Running             storage-provisioner       0                   37a8c2f557cde       storage-provisioner
	c0854dd1abcf9       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                             11 minutes ago      Running             coredns                   0                   c565a0f2f52b8       coredns-6f6b679f8f-sljbt
	7cc064acda755       docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64                           11 minutes ago      Running             kindnet-cni               0                   ba7fb4cc6f892       kindnet-wdlp4
	38638055bfba9       71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89                                                             11 minutes ago      Running             kube-proxy                0                   2faf839d32f54       kube-proxy-2x2mt
	cc59354075cb7       fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd                                                             12 minutes ago      Running             kube-controller-manager   0                   9d98609f879af       kube-controller-manager-addons-926553
	a2ceaab8a5e1b       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                             12 minutes ago      Running             etcd                      0                   003527351e2b0       etcd-addons-926553
	29388d95df021       fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb                                                             12 minutes ago      Running             kube-scheduler            0                   58f6b662812e6       kube-scheduler-addons-926553
	4f3de6a88ca04       cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388                                                             12 minutes ago      Running             kube-apiserver            0                   fec228035ae32       kube-apiserver-addons-926553
	
	
	==> coredns [c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb] <==
	[INFO] 10.244.0.14:47403 - 18828 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000107133s
	[INFO] 10.244.0.14:60100 - 56608 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.008414849s
	[INFO] 10.244.0.14:60100 - 41517 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.009713783s
	[INFO] 10.244.0.14:38062 - 19984 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000167759s
	[INFO] 10.244.0.14:38062 - 61468 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000152022s
	[INFO] 10.244.0.14:56768 - 49550 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000107535s
	[INFO] 10.244.0.14:56768 - 25522 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000038949s
	[INFO] 10.244.0.14:36032 - 41173 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000087826s
	[INFO] 10.244.0.14:36032 - 21969 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000059166s
	[INFO] 10.244.0.14:57338 - 29619 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000121532s
	[INFO] 10.244.0.14:57338 - 61873 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000038046s
	[INFO] 10.244.0.14:56027 - 58740 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002244404s
	[INFO] 10.244.0.14:56027 - 1643 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002177787s
	[INFO] 10.244.0.14:36047 - 49336 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000069111s
	[INFO] 10.244.0.14:36047 - 12732 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000192186s
	[INFO] 10.244.0.19:60080 - 19976 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000207808s
	[INFO] 10.244.0.19:44795 - 23051 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000118792s
	[INFO] 10.244.0.19:45334 - 37804 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000198151s
	[INFO] 10.244.0.19:49736 - 43423 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000110488s
	[INFO] 10.244.0.19:60561 - 60650 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127867s
	[INFO] 10.244.0.19:55452 - 41864 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000097204s
	[INFO] 10.244.0.19:54221 - 39065 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002445188s
	[INFO] 10.244.0.19:53320 - 41026 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00209399s
	[INFO] 10.244.0.19:57162 - 45093 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001822174s
	[INFO] 10.244.0.19:34360 - 14218 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002709595s
	
	
	==> describe nodes <==
	Name:               addons-926553
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-926553
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=addons-926553
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_31T22_33_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-926553
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:33:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-926553
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 22:45:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 22:44:26 +0000   Sat, 31 Aug 2024 22:33:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 22:44:26 +0000   Sat, 31 Aug 2024 22:33:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 22:44:26 +0000   Sat, 31 Aug 2024 22:33:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 22:44:26 +0000   Sat, 31 Aug 2024 22:34:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-926553
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 a4c4652ff78a412da204ff6653859615
	  System UUID:                a9959b90-2ddc-4599-b12a-adb3653f0cc6
	  Boot ID:                    4693db4c-e615-4c4b-bc39-ae1431ae1ebc
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  default                     cloud-spanner-emulator-769b77f747-nzrb6     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gadget                      gadget-nzlbh                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-ntcjg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-xrqz9    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         11m
	  kube-system                 coredns-6f6b679f8f-sljbt                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 etcd-addons-926553                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-wdlp4                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-addons-926553                250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-addons-926553       200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-2x2mt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-addons-926553                100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-zwvsl             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-5d9bc     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             510Mi (6%)   220Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-926553 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-926553 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-926553 status is now: NodeHasSufficientPID
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  11m                kubelet          Node addons-926553 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m                kubelet          Node addons-926553 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m                kubelet          Node addons-926553 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node addons-926553 event: Registered Node addons-926553 in Controller
	  Normal   NodeReady                11m                kubelet          Node addons-926553 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug31 20:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014722] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.471263] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.854339] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.621095] kauditd_printk_skb: 36 callbacks suppressed
	[Aug31 21:33] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Aug31 21:36] hrtimer: interrupt took 85633258 ns
	
	
	==> etcd [a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678] <==
	{"level":"warn","ts":"2024-08-31T22:33:33.524119Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"278.411585ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4109"}
	{"level":"info","ts":"2024-08-31T22:33:33.524263Z","caller":"traceutil/trace.go:171","msg":"trace[1066283787] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:417; }","duration":"278.562189ms","start":"2024-08-31T22:33:33.245688Z","end":"2024-08-31T22:33:33.524250Z","steps":["trace[1066283787] 'agreement among raft nodes before linearized reading'  (duration: 278.328041ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:33:33.524725Z","caller":"traceutil/trace.go:171","msg":"trace[1998513680] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"129.020394ms","start":"2024-08-31T22:33:33.395694Z","end":"2024-08-31T22:33:33.524714Z","steps":["trace[1998513680] 'process raft request'  (duration: 107.387194ms)","trace[1998513680] 'compare'  (duration: 20.622685ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-31T22:33:33.530997Z","caller":"traceutil/trace.go:171","msg":"trace[234719321] transaction","detail":"{read_only:false; response_revision:413; number_of_response:1; }","duration":"135.133009ms","start":"2024-08-31T22:33:33.395850Z","end":"2024-08-31T22:33:33.530983Z","steps":["trace[234719321] 'process raft request'  (duration: 127.947447ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T22:33:33.531851Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.575244ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-31T22:33:33.566193Z","caller":"traceutil/trace.go:171","msg":"trace[595826488] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:417; }","duration":"306.912441ms","start":"2024-08-31T22:33:33.259250Z","end":"2024-08-31T22:33:33.566162Z","steps":["trace[595826488] 'agreement among raft nodes before linearized reading'  (duration: 272.568401ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T22:33:33.573983Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-31T22:33:33.259231Z","time spent":"314.696303ms","remote":"127.0.0.1:50728","response type":"/etcdserverpb.KV/Range","request count":0,"request size":25,"response count":0,"response size":29,"request content":"key:\"/registry/limitranges\" limit:1 "}
	{"level":"warn","ts":"2024-08-31T22:33:33.531910Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.516901ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:1 size:3145"}
	{"level":"info","ts":"2024-08-31T22:33:33.577122Z","caller":"traceutil/trace.go:171","msg":"trace[1467882144] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:1; response_revision:417; }","duration":"164.71578ms","start":"2024-08-31T22:33:33.412390Z","end":"2024-08-31T22:33:33.577105Z","steps":["trace[1467882144] 'agreement among raft nodes before linearized reading'  (duration: 119.483424ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T22:33:33.531929Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.582255ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/metrics-server\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-08-31T22:33:33.531951Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.333652ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-08-31T22:33:33.531974Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"271.597156ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-08-31T22:33:33.531992Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.932034ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-08-31T22:33:33.532029Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.89155ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry\" ","response":"range_response_count:1 size:3351"}
	{"level":"info","ts":"2024-08-31T22:33:33.577617Z","caller":"traceutil/trace.go:171","msg":"trace[133143656] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:0; response_revision:417; }","duration":"165.263282ms","start":"2024-08-31T22:33:33.412344Z","end":"2024-08-31T22:33:33.577607Z","steps":["trace[133143656] 'agreement among raft nodes before linearized reading'  (duration: 119.576076ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:33:33.577692Z","caller":"traceutil/trace.go:171","msg":"trace[701626801] range","detail":"{range_begin:/registry/clusterrolebindings/storage-provisioner; range_end:; response_count:0; response_revision:417; }","duration":"182.073297ms","start":"2024-08-31T22:33:33.395612Z","end":"2024-08-31T22:33:33.577685Z","steps":["trace[701626801] 'agreement among raft nodes before linearized reading'  (duration: 136.325628ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:33:33.577710Z","caller":"traceutil/trace.go:171","msg":"trace[617058299] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:417; }","duration":"317.35042ms","start":"2024-08-31T22:33:33.260355Z","end":"2024-08-31T22:33:33.577705Z","steps":["trace[617058299] 'agreement among raft nodes before linearized reading'  (duration: 271.603752ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T22:33:33.609326Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-31T22:33:33.260279Z","time spent":"349.011862ms","remote":"127.0.0.1:50662","response type":"/etcdserverpb.KV/Range","request count":0,"request size":28,"response count":0,"response size":29,"request content":"key:\"/registry/resourcequotas\" limit:1 "}
	{"level":"info","ts":"2024-08-31T22:33:33.577966Z","caller":"traceutil/trace.go:171","msg":"trace[1867680583] range","detail":"{range_begin:/registry/clusterrolebindings/minikube-ingress-dns; range_end:; response_count:0; response_revision:417; }","duration":"318.901298ms","start":"2024-08-31T22:33:33.259056Z","end":"2024-08-31T22:33:33.577957Z","steps":["trace[1867680583] 'agreement among raft nodes before linearized reading'  (duration: 272.92616ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T22:33:33.610229Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-31T22:33:33.259019Z","time spent":"351.194926ms","remote":"127.0.0.1:50942","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":0,"response size":29,"request content":"key:\"/registry/clusterrolebindings/minikube-ingress-dns\" "}
	{"level":"info","ts":"2024-08-31T22:33:33.577987Z","caller":"traceutil/trace.go:171","msg":"trace[1222259269] range","detail":"{range_begin:/registry/deployments/kube-system/registry; range_end:; response_count:1; response_revision:417; }","duration":"318.848532ms","start":"2024-08-31T22:33:33.259134Z","end":"2024-08-31T22:33:33.577983Z","steps":["trace[1222259269] 'agreement among raft nodes before linearized reading'  (duration: 272.866747ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T22:33:33.614870Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-31T22:33:33.259122Z","time spent":"355.723597ms","remote":"127.0.0.1:51030","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":1,"response size":3375,"request content":"key:\"/registry/deployments/kube-system/registry\" "}
	{"level":"info","ts":"2024-08-31T22:43:18.076737Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1540}
	{"level":"info","ts":"2024-08-31T22:43:18.119550Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1540,"took":"42.327277ms","hash":1500695898,"current-db-size-bytes":6250496,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":3358720,"current-db-size-in-use":"3.4 MB"}
	{"level":"info","ts":"2024-08-31T22:43:18.119615Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1500695898,"revision":1540,"compact-revision":-1}
	
	
	==> gcp-auth [5102df2042c274c3bdda768e34fef45be4cf3338060a3b3ca18b308ef802a5b7] <==
	2024/08/31 22:35:11 GCP Auth Webhook started!
	2024/08/31 22:36:00 Ready to marshal response ...
	2024/08/31 22:36:00 Ready to write response ...
	2024/08/31 22:36:01 Ready to marshal response ...
	2024/08/31 22:36:01 Ready to write response ...
	2024/08/31 22:36:01 Ready to marshal response ...
	2024/08/31 22:36:01 Ready to write response ...
	2024/08/31 22:44:06 Ready to marshal response ...
	2024/08/31 22:44:06 Ready to write response ...
	2024/08/31 22:44:14 Ready to marshal response ...
	2024/08/31 22:44:14 Ready to write response ...
	2024/08/31 22:44:27 Ready to marshal response ...
	2024/08/31 22:44:27 Ready to write response ...
	2024/08/31 22:45:02 Ready to marshal response ...
	2024/08/31 22:45:02 Ready to write response ...
	2024/08/31 22:45:02 Ready to marshal response ...
	2024/08/31 22:45:02 Ready to write response ...
	2024/08/31 22:45:10 Ready to marshal response ...
	2024/08/31 22:45:10 Ready to write response ...
	
	
	==> kernel <==
	 22:45:17 up  2:27,  0 users,  load average: 0.55, 0.65, 1.51
	Linux addons-926553 5.15.0-1068-aws #74~20.04.1-Ubuntu SMP Tue Aug 6 19:45:17 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171] <==
	I0831 22:43:14.655973       1 main.go:299] handling current node
	I0831 22:43:24.650791       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:43:24.650842       1 main.go:299] handling current node
	I0831 22:43:34.649562       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:43:34.649598       1 main.go:299] handling current node
	I0831 22:43:44.654735       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:43:44.654776       1 main.go:299] handling current node
	I0831 22:43:54.652746       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:43:54.652784       1 main.go:299] handling current node
	I0831 22:44:04.658751       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:44:04.658870       1 main.go:299] handling current node
	I0831 22:44:14.649563       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:44:14.649606       1 main.go:299] handling current node
	I0831 22:44:24.649562       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:44:24.649595       1 main.go:299] handling current node
	I0831 22:44:34.649613       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:44:34.649648       1 main.go:299] handling current node
	I0831 22:44:44.649611       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:44:44.649649       1 main.go:299] handling current node
	I0831 22:44:54.649578       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:44:54.649612       1 main.go:299] handling current node
	I0831 22:45:04.649606       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:45:04.649745       1 main.go:299] handling current node
	I0831 22:45:14.650545       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:45:14.650583       1 main.go:299] handling current node
	
	
	==> kube-apiserver [4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0831 22:34:35.983804       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0831 22:34:35.983835       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0831 22:35:25.356102       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.19.220:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.19.220:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.19.220:443: connect: connection refused" logger="UnhandledError"
	W0831 22:35:25.357005       1 handler_proxy.go:99] no RequestInfo found in the context
	E0831 22:35:25.357143       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0831 22:35:25.405475       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0831 22:44:19.100699       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0831 22:44:43.651018       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:44:43.651160       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0831 22:44:43.684644       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:44:43.684781       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0831 22:44:43.702517       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:44:43.702581       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0831 22:44:43.707833       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:44:43.707886       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0831 22:44:43.743226       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:44:43.743276       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0831 22:44:44.708480       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0831 22:44:44.744290       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0831 22:44:44.836280       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0] <==
	E0831 22:44:48.483590       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:44:48.630510       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:44:48.630557       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0831 22:44:50.637687       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="4.816µs"
	W0831 22:44:51.584110       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:44:51.584153       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:44:54.445545       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:44:54.445592       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:44:54.501007       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:44:54.501050       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0831 22:44:57.688284       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0831 22:44:57.688328       1 shared_informer.go:320] Caches are synced for resource quota
	I0831 22:44:58.150184       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0831 22:44:58.150234       1 shared_informer.go:320] Caches are synced for garbage collector
	W0831 22:44:58.963778       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:44:58.963836       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0831 22:45:00.763712       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	W0831 22:45:06.305173       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:45:06.305336       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:45:06.444105       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:45:06.444163       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0831 22:45:15.591243       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="8.337µs"
	W0831 22:45:16.469884       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:45:16.469928       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0831 22:45:17.429147       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-769b77f747" duration="7.36µs"
	
	
	==> kube-proxy [38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87] <==
	I0831 22:33:33.909772       1 server_linux.go:66] "Using iptables proxy"
	I0831 22:33:34.876166       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0831 22:33:34.876653       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0831 22:33:35.043499       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0831 22:33:35.050030       1 server_linux.go:169] "Using iptables Proxier"
	I0831 22:33:35.104068       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0831 22:33:35.104588       1 server.go:483] "Version info" version="v1.31.0"
	I0831 22:33:35.104890       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 22:33:35.106274       1 config.go:197] "Starting service config controller"
	I0831 22:33:35.106395       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0831 22:33:35.106464       1 config.go:104] "Starting endpoint slice config controller"
	I0831 22:33:35.106494       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0831 22:33:35.107280       1 config.go:326] "Starting node config controller"
	I0831 22:33:35.107354       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0831 22:33:35.222348       1 shared_informer.go:320] Caches are synced for node config
	I0831 22:33:35.222470       1 shared_informer.go:320] Caches are synced for service config
	I0831 22:33:35.222534       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b] <==
	W0831 22:33:20.578962       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0831 22:33:20.578977       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:20.579019       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0831 22:33:20.579037       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:20.579079       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0831 22:33:20.579097       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:20.579189       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0831 22:33:20.579211       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:20.579279       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0831 22:33:20.579296       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:20.579337       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0831 22:33:20.579353       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:20.584824       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0831 22:33:20.584869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:21.398071       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0831 22:33:21.398208       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:21.413716       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0831 22:33:21.413827       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:21.497136       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0831 22:33:21.497258       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:21.589583       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0831 22:33:21.589719       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:21.860482       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0831 22:33:21.860528       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0831 22:33:24.865187       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 31 22:45:14 addons-926553 kubelet[1497]: I0831 22:45:14.897304    1497 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e851bb87-ad25-4276-910b-2eb567439f7a-kube-api-access-t82z9" (OuterVolumeSpecName: "kube-api-access-t82z9") pod "e851bb87-ad25-4276-910b-2eb567439f7a" (UID: "e851bb87-ad25-4276-910b-2eb567439f7a"). InnerVolumeSpecName "kube-api-access-t82z9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 31 22:45:14 addons-926553 kubelet[1497]: I0831 22:45:14.996068    1497 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-t82z9\" (UniqueName: \"kubernetes.io/projected/e851bb87-ad25-4276-910b-2eb567439f7a-kube-api-access-t82z9\") on node \"addons-926553\" DevicePath \"\""
	Aug 31 22:45:14 addons-926553 kubelet[1497]: I0831 22:45:14.996109    1497 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e851bb87-ad25-4276-910b-2eb567439f7a-gcp-creds\") on node \"addons-926553\" DevicePath \"\""
	Aug 31 22:45:15 addons-926553 kubelet[1497]: I0831 22:45:15.062909    1497 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09b19c45-0473-4fc2-b088-a60502dca385" path="/var/lib/kubelet/pods/09b19c45-0473-4fc2-b088-a60502dca385/volumes"
	Aug 31 22:45:15 addons-926553 kubelet[1497]: I0831 22:45:15.903110    1497 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh89f\" (UniqueName: \"kubernetes.io/projected/000dc781-4a18-4524-b73a-681e34eaa529-kube-api-access-zh89f\") pod \"000dc781-4a18-4524-b73a-681e34eaa529\" (UID: \"000dc781-4a18-4524-b73a-681e34eaa529\") "
	Aug 31 22:45:15 addons-926553 kubelet[1497]: I0831 22:45:15.920924    1497 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/000dc781-4a18-4524-b73a-681e34eaa529-kube-api-access-zh89f" (OuterVolumeSpecName: "kube-api-access-zh89f") pod "000dc781-4a18-4524-b73a-681e34eaa529" (UID: "000dc781-4a18-4524-b73a-681e34eaa529"). InnerVolumeSpecName "kube-api-access-zh89f". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 31 22:45:16 addons-926553 kubelet[1497]: I0831 22:45:16.004360    1497 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfbs6\" (UniqueName: \"kubernetes.io/projected/f354b100-f3b2-4369-b6de-637de12a35fb-kube-api-access-wfbs6\") pod \"f354b100-f3b2-4369-b6de-637de12a35fb\" (UID: \"f354b100-f3b2-4369-b6de-637de12a35fb\") "
	Aug 31 22:45:16 addons-926553 kubelet[1497]: I0831 22:45:16.004535    1497 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zh89f\" (UniqueName: \"kubernetes.io/projected/000dc781-4a18-4524-b73a-681e34eaa529-kube-api-access-zh89f\") on node \"addons-926553\" DevicePath \"\""
	Aug 31 22:45:16 addons-926553 kubelet[1497]: I0831 22:45:16.020881    1497 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f354b100-f3b2-4369-b6de-637de12a35fb-kube-api-access-wfbs6" (OuterVolumeSpecName: "kube-api-access-wfbs6") pod "f354b100-f3b2-4369-b6de-637de12a35fb" (UID: "f354b100-f3b2-4369-b6de-637de12a35fb"). InnerVolumeSpecName "kube-api-access-wfbs6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 31 22:45:16 addons-926553 kubelet[1497]: I0831 22:45:16.105627    1497 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wfbs6\" (UniqueName: \"kubernetes.io/projected/f354b100-f3b2-4369-b6de-637de12a35fb-kube-api-access-wfbs6\") on node \"addons-926553\" DevicePath \"\""
	Aug 31 22:45:16 addons-926553 kubelet[1497]: I0831 22:45:16.815388    1497 scope.go:117] "RemoveContainer" containerID="33089909b9ae9196ae44966dd513637f0c43528b3163034142ee8aa2aed3abea"
	Aug 31 22:45:16 addons-926553 kubelet[1497]: I0831 22:45:16.855459    1497 scope.go:117] "RemoveContainer" containerID="33089909b9ae9196ae44966dd513637f0c43528b3163034142ee8aa2aed3abea"
	Aug 31 22:45:16 addons-926553 kubelet[1497]: E0831 22:45:16.861097    1497 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33089909b9ae9196ae44966dd513637f0c43528b3163034142ee8aa2aed3abea\": container with ID starting with 33089909b9ae9196ae44966dd513637f0c43528b3163034142ee8aa2aed3abea not found: ID does not exist" containerID="33089909b9ae9196ae44966dd513637f0c43528b3163034142ee8aa2aed3abea"
	Aug 31 22:45:16 addons-926553 kubelet[1497]: I0831 22:45:16.861141    1497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33089909b9ae9196ae44966dd513637f0c43528b3163034142ee8aa2aed3abea"} err="failed to get container status \"33089909b9ae9196ae44966dd513637f0c43528b3163034142ee8aa2aed3abea\": rpc error: code = NotFound desc = could not find container \"33089909b9ae9196ae44966dd513637f0c43528b3163034142ee8aa2aed3abea\": container with ID starting with 33089909b9ae9196ae44966dd513637f0c43528b3163034142ee8aa2aed3abea not found: ID does not exist"
	Aug 31 22:45:16 addons-926553 kubelet[1497]: I0831 22:45:16.861169    1497 scope.go:117] "RemoveContainer" containerID="253c8791474bd3529943d7e5e23f781a14fe0376240be8054563cbc06fdd132a"
	Aug 31 22:45:17 addons-926553 kubelet[1497]: I0831 22:45:17.052127    1497 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="000dc781-4a18-4524-b73a-681e34eaa529" path="/var/lib/kubelet/pods/000dc781-4a18-4524-b73a-681e34eaa529/volumes"
	Aug 31 22:45:17 addons-926553 kubelet[1497]: I0831 22:45:17.052555    1497 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e851bb87-ad25-4276-910b-2eb567439f7a" path="/var/lib/kubelet/pods/e851bb87-ad25-4276-910b-2eb567439f7a/volumes"
	Aug 31 22:45:17 addons-926553 kubelet[1497]: I0831 22:45:17.052788    1497 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f354b100-f3b2-4369-b6de-637de12a35fb" path="/var/lib/kubelet/pods/f354b100-f3b2-4369-b6de-637de12a35fb/volumes"
	Aug 31 22:45:17 addons-926553 kubelet[1497]: I0831 22:45:17.716191    1497 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w25c7\" (UniqueName: \"kubernetes.io/projected/b31db60d-0b27-45db-bc2c-5455cc2c701d-kube-api-access-w25c7\") pod \"b31db60d-0b27-45db-bc2c-5455cc2c701d\" (UID: \"b31db60d-0b27-45db-bc2c-5455cc2c701d\") "
	Aug 31 22:45:17 addons-926553 kubelet[1497]: I0831 22:45:17.728766    1497 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b31db60d-0b27-45db-bc2c-5455cc2c701d-kube-api-access-w25c7" (OuterVolumeSpecName: "kube-api-access-w25c7") pod "b31db60d-0b27-45db-bc2c-5455cc2c701d" (UID: "b31db60d-0b27-45db-bc2c-5455cc2c701d"). InnerVolumeSpecName "kube-api-access-w25c7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 31 22:45:17 addons-926553 kubelet[1497]: I0831 22:45:17.817396    1497 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-w25c7\" (UniqueName: \"kubernetes.io/projected/b31db60d-0b27-45db-bc2c-5455cc2c701d-kube-api-access-w25c7\") on node \"addons-926553\" DevicePath \"\""
	Aug 31 22:45:17 addons-926553 kubelet[1497]: I0831 22:45:17.823918    1497 scope.go:117] "RemoveContainer" containerID="505b1ff847477c7842db1f1e447be5ce7b4014cacabf7251d3e27d5d1200b822"
	Aug 31 22:45:17 addons-926553 kubelet[1497]: I0831 22:45:17.855128    1497 scope.go:117] "RemoveContainer" containerID="505b1ff847477c7842db1f1e447be5ce7b4014cacabf7251d3e27d5d1200b822"
	Aug 31 22:45:17 addons-926553 kubelet[1497]: E0831 22:45:17.855629    1497 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"505b1ff847477c7842db1f1e447be5ce7b4014cacabf7251d3e27d5d1200b822\": container with ID starting with 505b1ff847477c7842db1f1e447be5ce7b4014cacabf7251d3e27d5d1200b822 not found: ID does not exist" containerID="505b1ff847477c7842db1f1e447be5ce7b4014cacabf7251d3e27d5d1200b822"
	Aug 31 22:45:17 addons-926553 kubelet[1497]: I0831 22:45:17.855665    1497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"505b1ff847477c7842db1f1e447be5ce7b4014cacabf7251d3e27d5d1200b822"} err="failed to get container status \"505b1ff847477c7842db1f1e447be5ce7b4014cacabf7251d3e27d5d1200b822\": rpc error: code = NotFound desc = could not find container \"505b1ff847477c7842db1f1e447be5ce7b4014cacabf7251d3e27d5d1200b822\": container with ID starting with 505b1ff847477c7842db1f1e447be5ce7b4014cacabf7251d3e27d5d1200b822 not found: ID does not exist"
	
	
	==> storage-provisioner [d4a4a18a5a7f6d6b98241bc922d29ac28c4b9779e5a615453b66ea70509523e8] <==
	I0831 22:34:15.733314       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0831 22:34:15.907321       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0831 22:34:15.907562       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0831 22:34:16.042095       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0831 22:34:16.048020       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-926553_0312c23a-1da9-4f4f-853d-3c7ebaecd6a0!
	I0831 22:34:16.060065       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d1090045-d7c1-4b36-83f3-943893f1aa8d", APIVersion:"v1", ResourceVersion:"934", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-926553_0312c23a-1da9-4f4f-853d-3c7ebaecd6a0 became leader
	I0831 22:34:16.149026       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-926553_0312c23a-1da9-4f4f-853d-3c7ebaecd6a0!
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-926553 -n addons-926553
helpers_test.go:262: (dbg) Run:  kubectl --context addons-926553 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:273: non-running pods: busybox headlamp-57fb76fcdb-7bd4m ingress-nginx-admission-create-pxdjc ingress-nginx-admission-patch-qsgmg
helpers_test.go:275: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:278: (dbg) Run:  kubectl --context addons-926553 describe pod busybox headlamp-57fb76fcdb-7bd4m ingress-nginx-admission-create-pxdjc ingress-nginx-admission-patch-qsgmg
helpers_test.go:278: (dbg) Non-zero exit: kubectl --context addons-926553 describe pod busybox headlamp-57fb76fcdb-7bd4m ingress-nginx-admission-create-pxdjc ingress-nginx-admission-patch-qsgmg: exit status 1 (147.538159ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-926553/192.168.49.2
	Start Time:       Sat, 31 Aug 2024 22:36:01 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-npklh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-npklh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m18s                   default-scheduler  Successfully assigned default/busybox to addons-926553
	  Normal   Pulling    7m43s (x4 over 9m18s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m43s (x4 over 9m18s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m43s (x4 over 9m18s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m32s (x6 over 9m17s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m15s (x19 over 9m17s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "headlamp-57fb76fcdb-7bd4m" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-pxdjc" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-qsgmg" not found

** /stderr **
helpers_test.go:280: kubectl --context addons-926553 describe pod busybox headlamp-57fb76fcdb-7bd4m ingress-nginx-admission-create-pxdjc ingress-nginx-admission-patch-qsgmg: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.75s)
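
The events above show gcr.io/k8s-minikube/busybox repeatedly failing to pull because no auth token could be retrieved, so any pod built on that image (including the in-cluster registry connectivity probe this test launches) stays in ImagePullBackOff until the test times out. Below is a minimal Go sketch of an equivalent reachability check against the registry service, assuming it runs inside the cluster where the kube-system service DNS resolves; it is illustrative only and not part of addons_test.go.

	// registry_probe.go: illustrative only, not part of the minikube test suite.
	// Assumes it runs in a pod where registry.kube-system.svc.cluster.local resolves.
	package main

	import (
		"fmt"
		"net/http"
		"os"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Get("http://registry.kube-system.svc.cluster.local")
		if err != nil {
			fmt.Fprintln(os.Stderr, "registry unreachable:", err)
			os.Exit(1)
		}
		defer resp.Body.Close()
		// The registry front end is expected to answer with an HTTP 200.
		fmt.Println("status:", resp.Status)
	}
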

                                                
                                    
TestAddons/parallel/Ingress (151.15s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-926553 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-926553 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-926553 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:345: "nginx" [4dbb82cf-1856-4f9a-a94d-fd3bb62b0b36] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:345: "nginx" [4dbb82cf-1856-4f9a-a94d-fd3bb62b0b36] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004093272s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-926553 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-926553 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.485898783s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-926553 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-926553 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-926553 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-926553 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-926553 addons disable ingress --alsologtostderr -v=1: (7.864293357s)
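
The failing step in this test is the curl issued through `minikube ssh`: exit status 28 is curl's operation-timed-out code, so the ingress controller never answered the Host: nginx.example.com request on the node within the ~2m9s window, even though the nginx backend pod itself reported Running. A rough external equivalent of that request is sketched below, assuming the node IP 192.168.49.2 from this report and targeting the node directly instead of 127.0.0.1 from inside it; it is illustrative and not the test's own code.

	// ingress_request.go: illustrative sketch of the Host-header request the test
	// issues with curl inside the node. The node IP below is taken from this report;
	// from inside the node the test targets 127.0.0.1 instead.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		req, err := http.NewRequest(http.MethodGet, "http://192.168.49.2/", nil)
		if err != nil {
			panic(err)
		}
		// Routing is decided by the ingress rule matching this Host header, not by DNS.
		req.Host = "nginx.example.com"

		client := &http.Client{Timeout: 15 * time.Second}
		resp, err := client.Do(req)
		if err != nil {
			fmt.Println("ingress did not answer:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("status:", resp.Status)
	}
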
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect addons-926553
helpers_test.go:236: (dbg) docker inspect addons-926553:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2b41c4e07f7abc3eee3ba56e7ee5b2b22c7ae1259d49e9bd6c1a695e687c691a",
	        "Created": "2024-08-31T22:32:58.142499264Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 284459,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-31T22:32:58.286853851Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:eb620c1d7126103417d4dc31eb6aaaf95b0878713d0303a36cb77002c31b0deb",
	        "ResolvConfPath": "/var/lib/docker/containers/2b41c4e07f7abc3eee3ba56e7ee5b2b22c7ae1259d49e9bd6c1a695e687c691a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2b41c4e07f7abc3eee3ba56e7ee5b2b22c7ae1259d49e9bd6c1a695e687c691a/hostname",
	        "HostsPath": "/var/lib/docker/containers/2b41c4e07f7abc3eee3ba56e7ee5b2b22c7ae1259d49e9bd6c1a695e687c691a/hosts",
	        "LogPath": "/var/lib/docker/containers/2b41c4e07f7abc3eee3ba56e7ee5b2b22c7ae1259d49e9bd6c1a695e687c691a/2b41c4e07f7abc3eee3ba56e7ee5b2b22c7ae1259d49e9bd6c1a695e687c691a-json.log",
	        "Name": "/addons-926553",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-926553:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-926553",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3ee03926f17b7c804764f694abd55e3fb29259d457363383da7117854abec424-init/diff:/var/lib/docker/overlay2/b65bd3df822a42b081e949f262147909a06a528615f1ebee5ca341285d3e7159/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3ee03926f17b7c804764f694abd55e3fb29259d457363383da7117854abec424/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3ee03926f17b7c804764f694abd55e3fb29259d457363383da7117854abec424/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3ee03926f17b7c804764f694abd55e3fb29259d457363383da7117854abec424/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-926553",
	                "Source": "/var/lib/docker/volumes/addons-926553/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-926553",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-926553",
	                "name.minikube.sigs.k8s.io": "addons-926553",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "299f7cd903653354b274e148f6cb6a39ed6942891df3e3272bc94377e3fd800f",
	            "SandboxKey": "/var/run/docker/netns/299f7cd90365",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-926553": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "7a8828e69332b37e7bad00ea7f7da101018d986bdcdd9608e22ba654914df386",
	                    "EndpointID": "f81499bc432f0db4a48aaa2f7a33d2bce9def00a9f596d90ba418160f18b3dd7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-926553",
	                        "2b41c4e07f7a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
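
The inspect output above shows the kicbase container publishing ports 22, 2376, 5000, 8443 and 32443 on ephemeral 127.0.0.1 host ports (33133-33137 for this run); the empty HostPort values under HostConfig.PortBindings simply mean Docker was asked to choose them. Later in the start log the SSH port is read back with a `docker container inspect` Go template; the standalone sketch below performs the same lookup, using the profile name from this report (illustrative, not minikube's own code).

	// ssh_port_lookup.go: illustrative; shells out to the docker CLI with the same
	// template minikube's cli_runner uses later in this log to find the published
	// SSH port. The container name "addons-926553" is this report's profile.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "addons-926553").Output()
		if err != nil {
			panic(err)
		}
		// For this run the result is 33133, matching the NetworkSettings block above.
		fmt.Println("ssh published on 127.0.0.1:" + strings.TrimSpace(string(out)))
	}
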
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-926553 -n addons-926553
helpers_test.go:245: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p addons-926553 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p addons-926553 logs -n 25: (1.515226458s)
helpers_test.go:253: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-847558                                                                     | download-only-847558   | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	| delete  | -p download-only-030884                                                                     | download-only-030884   | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	| start   | --download-only -p                                                                          | download-docker-718632 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC |                     |
	|         | download-docker-718632                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-718632                                                                   | download-docker-718632 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-123480   | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC |                     |
	|         | binary-mirror-123480                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:44745                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-123480                                                                     | binary-mirror-123480   | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	| addons  | enable dashboard -p                                                                         | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC |                     |
	|         | addons-926553                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC |                     |
	|         | addons-926553                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-926553 --wait=true                                                                | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:36 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-926553 addons                                                                        | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:44 UTC | 31 Aug 24 22:44 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-926553 addons                                                                        | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:44 UTC | 31 Aug 24 22:44 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-926553 addons disable                                                                | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:44 UTC | 31 Aug 24 22:44 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:45 UTC | 31 Aug 24 22:45 UTC |
	|         | -p addons-926553                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-926553 ssh cat                                                                       | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:45 UTC | 31 Aug 24 22:45 UTC |
	|         | /opt/local-path-provisioner/pvc-329ee4ba-4ee8-45f1-ba46-e92218961da0_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-926553 addons disable                                                                | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:45 UTC | 31 Aug 24 22:45 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-926553 ip                                                                            | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:45 UTC | 31 Aug 24 22:45 UTC |
	| addons  | addons-926553 addons disable                                                                | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:45 UTC | 31 Aug 24 22:45 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:45 UTC | 31 Aug 24 22:45 UTC |
	|         | addons-926553                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:45 UTC | 31 Aug 24 22:45 UTC |
	|         | -p addons-926553                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-926553 addons disable                                                                | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:45 UTC | 31 Aug 24 22:45 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:45 UTC | 31 Aug 24 22:45 UTC |
	|         | addons-926553                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-926553 ssh curl -s                                                                   | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:45 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-926553 ip                                                                            | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:48 UTC | 31 Aug 24 22:48 UTC |
	| addons  | addons-926553 addons disable                                                                | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:48 UTC | 31 Aug 24 22:48 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-926553 addons disable                                                                | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:48 UTC | 31 Aug 24 22:48 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 22:32:33
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 22:32:33.055573  283957 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:32:33.055738  283957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:32:33.055749  283957 out.go:358] Setting ErrFile to fd 2...
	I0831 22:32:33.055754  283957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:32:33.056034  283957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-277799/.minikube/bin
	I0831 22:32:33.056594  283957 out.go:352] Setting JSON to false
	I0831 22:32:33.057655  283957 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":8101,"bootTime":1725135452,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0831 22:32:33.057748  283957 start.go:139] virtualization:  
	I0831 22:32:33.061311  283957 out.go:177] * [addons-926553] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0831 22:32:33.065254  283957 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 22:32:33.065416  283957 notify.go:220] Checking for updates...
	I0831 22:32:33.070822  283957 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:32:33.074065  283957 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-277799/kubeconfig
	I0831 22:32:33.076774  283957 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-277799/.minikube
	I0831 22:32:33.079454  283957 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0831 22:32:33.082232  283957 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 22:32:33.085445  283957 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:32:33.116782  283957 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0831 22:32:33.116914  283957 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:32:33.173707  283957 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-31 22:32:33.16402705 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:32:33.173832  283957 docker.go:307] overlay module found
	I0831 22:32:33.176642  283957 out.go:177] * Using the docker driver based on user configuration
	I0831 22:32:33.179170  283957 start.go:297] selected driver: docker
	I0831 22:32:33.179214  283957 start.go:901] validating driver "docker" against <nil>
	I0831 22:32:33.179232  283957 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 22:32:33.179877  283957 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:32:33.244492  283957 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-31 22:32:33.235116551 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:32:33.244664  283957 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 22:32:33.244891  283957 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 22:32:33.247588  283957 out.go:177] * Using Docker driver with root privileges
	I0831 22:32:33.250073  283957 cni.go:84] Creating CNI manager for ""
	I0831 22:32:33.250100  283957 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0831 22:32:33.250112  283957 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0831 22:32:33.250206  283957 start.go:340] cluster config:
	{Name:addons-926553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-926553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:32:33.253061  283957 out.go:177] * Starting "addons-926553" primary control-plane node in "addons-926553" cluster
	I0831 22:32:33.255456  283957 cache.go:121] Beginning downloading kic base image for docker with crio
	I0831 22:32:33.258049  283957 out.go:177] * Pulling base image v0.0.44-1724862063-19530 ...
	I0831 22:32:33.260597  283957 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:32:33.260655  283957 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18943-277799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	I0831 22:32:33.260667  283957 cache.go:56] Caching tarball of preloaded images
	I0831 22:32:33.260691  283957 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local docker daemon
	I0831 22:32:33.260749  283957 preload.go:172] Found /home/jenkins/minikube-integration/18943-277799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0831 22:32:33.260760  283957 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0831 22:32:33.261148  283957 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/config.json ...
	I0831 22:32:33.261182  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/config.json: {Name:mkdfcbbb034ebf13d0c934d3b8bb6283f2353c8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:32:33.276646  283957 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 to local cache
	I0831 22:32:33.276792  283957 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory
	I0831 22:32:33.276818  283957 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory, skipping pull
	I0831 22:32:33.276823  283957 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 exists in cache, skipping pull
	I0831 22:32:33.276832  283957 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 as a tarball
	I0831 22:32:33.276842  283957 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from local cache
	I0831 22:32:50.926792  283957 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from cached tarball
	I0831 22:32:50.926833  283957 cache.go:194] Successfully downloaded all kic artifacts
	I0831 22:32:50.926891  283957 start.go:360] acquireMachinesLock for addons-926553: {Name:mk45b5d2bdf6c02f40299229aa5af77faafa98b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 22:32:50.927022  283957 start.go:364] duration metric: took 106.732µs to acquireMachinesLock for "addons-926553"
	I0831 22:32:50.927053  283957 start.go:93] Provisioning new machine with config: &{Name:addons-926553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-926553 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 22:32:50.927149  283957 start.go:125] createHost starting for "" (driver="docker")
	I0831 22:32:50.929291  283957 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0831 22:32:50.929542  283957 start.go:159] libmachine.API.Create for "addons-926553" (driver="docker")
	I0831 22:32:50.929577  283957 client.go:168] LocalClient.Create starting
	I0831 22:32:50.929688  283957 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem
	I0831 22:32:51.568232  283957 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem
	I0831 22:32:51.959805  283957 cli_runner.go:164] Run: docker network inspect addons-926553 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0831 22:32:51.976476  283957 cli_runner.go:211] docker network inspect addons-926553 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0831 22:32:51.976564  283957 network_create.go:284] running [docker network inspect addons-926553] to gather additional debugging logs...
	I0831 22:32:51.976587  283957 cli_runner.go:164] Run: docker network inspect addons-926553
	W0831 22:32:51.998246  283957 cli_runner.go:211] docker network inspect addons-926553 returned with exit code 1
	I0831 22:32:51.998286  283957 network_create.go:287] error running [docker network inspect addons-926553]: docker network inspect addons-926553: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-926553 not found
	I0831 22:32:51.998301  283957 network_create.go:289] output of [docker network inspect addons-926553]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-926553 not found
	
	** /stderr **
	I0831 22:32:51.998418  283957 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0831 22:32:52.020066  283957 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017aa870}
	I0831 22:32:52.020113  283957 network_create.go:124] attempt to create docker network addons-926553 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0831 22:32:52.020180  283957 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-926553 addons-926553
	I0831 22:32:52.103358  283957 network_create.go:108] docker network addons-926553 192.168.49.0/24 created
	I0831 22:32:52.103398  283957 kic.go:121] calculated static IP "192.168.49.2" for the "addons-926553" container
	I0831 22:32:52.103481  283957 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0831 22:32:52.117925  283957 cli_runner.go:164] Run: docker volume create addons-926553 --label name.minikube.sigs.k8s.io=addons-926553 --label created_by.minikube.sigs.k8s.io=true
	I0831 22:32:52.134920  283957 oci.go:103] Successfully created a docker volume addons-926553
	I0831 22:32:52.135011  283957 cli_runner.go:164] Run: docker run --rm --name addons-926553-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-926553 --entrypoint /usr/bin/test -v addons-926553:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -d /var/lib
	I0831 22:32:53.917914  283957 cli_runner.go:217] Completed: docker run --rm --name addons-926553-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-926553 --entrypoint /usr/bin/test -v addons-926553:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -d /var/lib: (1.78286744s)
	I0831 22:32:53.917946  283957 oci.go:107] Successfully prepared a docker volume addons-926553
	I0831 22:32:53.917968  283957 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:32:53.917988  283957 kic.go:194] Starting extracting preloaded images to volume ...
	I0831 22:32:53.918085  283957 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-277799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-926553:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0831 22:32:58.069694  283957 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-277799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-926553:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.151551571s)
	I0831 22:32:58.069731  283957 kic.go:203] duration metric: took 4.15173909s to extract preloaded images to volume ...
	W0831 22:32:58.069874  283957 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0831 22:32:58.069992  283957 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0831 22:32:58.127293  283957 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-926553 --name addons-926553 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-926553 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-926553 --network addons-926553 --ip 192.168.49.2 --volume addons-926553:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0
	I0831 22:32:58.451756  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Running}}
	I0831 22:32:58.471081  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:32:58.493141  283957 cli_runner.go:164] Run: docker exec addons-926553 stat /var/lib/dpkg/alternatives/iptables
	I0831 22:32:58.579570  283957 oci.go:144] the created container "addons-926553" has a running status.
	I0831 22:32:58.579597  283957 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa...
	I0831 22:32:58.856139  283957 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0831 22:32:58.888353  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:32:58.918856  283957 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0831 22:32:58.918881  283957 kic_runner.go:114] Args: [docker exec --privileged addons-926553 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0831 22:32:58.994745  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:32:59.020659  283957 machine.go:93] provisionDockerMachine start ...
	I0831 22:32:59.020755  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:32:59.042776  283957 main.go:141] libmachine: Using SSH client type: native
	I0831 22:32:59.043049  283957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0831 22:32:59.043065  283957 main.go:141] libmachine: About to run SSH command:
	hostname
	I0831 22:32:59.043777  283957 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0831 22:33:02.183965  283957 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-926553
	
	I0831 22:33:02.183992  283957 ubuntu.go:169] provisioning hostname "addons-926553"
	I0831 22:33:02.184057  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:02.201134  283957 main.go:141] libmachine: Using SSH client type: native
	I0831 22:33:02.201387  283957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0831 22:33:02.201404  283957 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-926553 && echo "addons-926553" | sudo tee /etc/hostname
	I0831 22:33:02.349789  283957 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-926553
	
	I0831 22:33:02.349888  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:02.372048  283957 main.go:141] libmachine: Using SSH client type: native
	I0831 22:33:02.372306  283957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0831 22:33:02.372323  283957 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-926553' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-926553/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-926553' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 22:33:02.504705  283957 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 22:33:02.504736  283957 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-277799/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-277799/.minikube}
	I0831 22:33:02.504768  283957 ubuntu.go:177] setting up certificates
	I0831 22:33:02.504779  283957 provision.go:84] configureAuth start
	I0831 22:33:02.504849  283957 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "addons-926553")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-926553
	I0831 22:33:02.523280  283957 provision.go:143] copyHostCerts
	I0831 22:33:02.523372  283957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-277799/.minikube/ca.pem (1082 bytes)
	I0831 22:33:02.523504  283957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-277799/.minikube/cert.pem (1123 bytes)
	I0831 22:33:02.523567  283957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-277799/.minikube/key.pem (1675 bytes)
	I0831 22:33:02.523620  283957 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca-key.pem org=jenkins.addons-926553 san=[127.0.0.1 192.168.49.2 addons-926553 localhost minikube]
	I0831 22:33:02.933713  283957 provision.go:177] copyRemoteCerts
	I0831 22:33:02.933792  283957 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 22:33:02.933842  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:02.950418  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:03.053745  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0831 22:33:03.085010  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0831 22:33:03.111911  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0831 22:33:03.138695  283957 provision.go:87] duration metric: took 633.893833ms to configureAuth
	I0831 22:33:03.138724  283957 ubuntu.go:193] setting minikube options for container-runtime
	I0831 22:33:03.138976  283957 config.go:182] Loaded profile config "addons-926553": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:33:03.139098  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:03.157231  283957 main.go:141] libmachine: Using SSH client type: native
	I0831 22:33:03.157489  283957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0831 22:33:03.157510  283957 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0831 22:33:03.395474  283957 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0831 22:33:03.395500  283957 machine.go:96] duration metric: took 4.374820866s to provisionDockerMachine
	I0831 22:33:03.395511  283957 client.go:171] duration metric: took 12.46592371s to LocalClient.Create
	I0831 22:33:03.395523  283957 start.go:167] duration metric: took 12.465982753s to libmachine.API.Create "addons-926553"
	I0831 22:33:03.395532  283957 start.go:293] postStartSetup for "addons-926553" (driver="docker")
	I0831 22:33:03.395543  283957 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 22:33:03.395618  283957 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 22:33:03.395665  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:03.414120  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:03.513743  283957 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 22:33:03.517073  283957 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0831 22:33:03.517108  283957 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0831 22:33:03.517137  283957 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0831 22:33:03.517155  283957 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0831 22:33:03.517165  283957 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-277799/.minikube/addons for local assets ...
	I0831 22:33:03.517246  283957 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-277799/.minikube/files for local assets ...
	I0831 22:33:03.517272  283957 start.go:296] duration metric: took 121.734053ms for postStartSetup
	I0831 22:33:03.517586  283957 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "addons-926553")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-926553
	I0831 22:33:03.539317  283957 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/config.json ...
	I0831 22:33:03.539619  283957 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:33:03.539672  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:03.556680  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:03.650277  283957 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0831 22:33:03.654747  283957 start.go:128] duration metric: took 12.727579827s to createHost
	I0831 22:33:03.654772  283957 start.go:83] releasing machines lock for "addons-926553", held for 12.727737422s
	I0831 22:33:03.654860  283957 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "addons-926553")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-926553
	I0831 22:33:03.672628  283957 ssh_runner.go:195] Run: cat /version.json
	I0831 22:33:03.672710  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:03.673358  283957 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 22:33:03.673442  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:03.697266  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:03.710029  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:03.795932  283957 ssh_runner.go:195] Run: systemctl --version
	I0831 22:33:03.930195  283957 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0831 22:33:04.071340  283957 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0831 22:33:04.075814  283957 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 22:33:04.099545  283957 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0831 22:33:04.099629  283957 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 22:33:04.136429  283957 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0831 22:33:04.136452  283957 start.go:495] detecting cgroup driver to use...
	I0831 22:33:04.136490  283957 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0831 22:33:04.136563  283957 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0831 22:33:04.152782  283957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 22:33:04.164726  283957 docker.go:217] disabling cri-docker service (if available) ...
	I0831 22:33:04.164790  283957 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0831 22:33:04.179068  283957 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0831 22:33:04.193725  283957 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0831 22:33:04.288369  283957 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0831 22:33:04.384337  283957 docker.go:233] disabling docker service ...
	I0831 22:33:04.384478  283957 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0831 22:33:04.405127  283957 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0831 22:33:04.417339  283957 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0831 22:33:04.502240  283957 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0831 22:33:04.591263  283957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0831 22:33:04.604121  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 22:33:04.621501  283957 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0831 22:33:04.621615  283957 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:33:04.632529  283957 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0831 22:33:04.632622  283957 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:33:04.642518  283957 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:33:04.652512  283957 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:33:04.663605  283957 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 22:33:04.672528  283957 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:33:04.682613  283957 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:33:04.698852  283957 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:33:04.708709  283957 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 22:33:04.716981  283957 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 22:33:04.725394  283957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:33:04.831046  283957 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0831 22:33:04.953766  283957 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0831 22:33:04.953873  283957 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0831 22:33:04.958520  283957 start.go:563] Will wait 60s for crictl version
	I0831 22:33:04.958584  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:33:04.962128  283957 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 22:33:04.997059  283957 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0831 22:33:04.997167  283957 ssh_runner.go:195] Run: crio --version
	I0831 22:33:05.045856  283957 ssh_runner.go:195] Run: crio --version
	I0831 22:33:05.092004  283957 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0831 22:33:05.094977  283957 cli_runner.go:164] Run: docker network inspect addons-926553 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0831 22:33:05.112048  283957 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0831 22:33:05.116110  283957 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 22:33:05.128026  283957 kubeadm.go:883] updating cluster {Name:addons-926553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-926553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0831 22:33:05.128170  283957 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:33:05.128234  283957 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 22:33:05.208377  283957 crio.go:514] all images are preloaded for cri-o runtime.
	I0831 22:33:05.208421  283957 crio.go:433] Images already preloaded, skipping extraction
	I0831 22:33:05.208479  283957 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 22:33:05.246065  283957 crio.go:514] all images are preloaded for cri-o runtime.
	I0831 22:33:05.246089  283957 cache_images.go:84] Images are preloaded, skipping loading
	I0831 22:33:05.246099  283957 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0831 22:33:05.246205  283957 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-926553 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-926553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0831 22:33:05.246297  283957 ssh_runner.go:195] Run: crio config
	I0831 22:33:05.292734  283957 cni.go:84] Creating CNI manager for ""
	I0831 22:33:05.292759  283957 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0831 22:33:05.292771  283957 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0831 22:33:05.292794  283957 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-926553 NodeName:addons-926553 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0831 22:33:05.293025  283957 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-926553"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0831 22:33:05.293106  283957 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0831 22:33:05.302182  283957 binaries.go:44] Found k8s binaries, skipping transfer
	I0831 22:33:05.302257  283957 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0831 22:33:05.311092  283957 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0831 22:33:05.329236  283957 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 22:33:05.347791  283957 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0831 22:33:05.366848  283957 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0831 22:33:05.370373  283957 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 22:33:05.381457  283957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:33:05.465768  283957 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 22:33:05.479694  283957 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553 for IP: 192.168.49.2
	I0831 22:33:05.479717  283957 certs.go:194] generating shared ca certs ...
	I0831 22:33:05.479733  283957 certs.go:226] acquiring lock for ca certs: {Name:mk25c48345241d49df22687ae20353d5a7b46e8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:05.479864  283957 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key
	I0831 22:33:06.370705  283957 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt ...
	I0831 22:33:06.370800  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt: {Name:mk127fa4684d9b07fbbfe78fd379ac7f2858784d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:06.371022  283957 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key ...
	I0831 22:33:06.371065  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key: {Name:mkaa1c85c29bc9b8e67687de42c28210df6897ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:06.372603  283957 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key
	I0831 22:33:06.601904  283957 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.crt ...
	I0831 22:33:06.601936  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.crt: {Name:mkdc81b529896f489764dcced8efa122bc80e6c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:06.602125  283957 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key ...
	I0831 22:33:06.602138  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key: {Name:mkd36c32182ba675bb26d2d1c2420f0531884885 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:06.602761  283957 certs.go:256] generating profile certs ...
	I0831 22:33:06.602831  283957 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.key
	I0831 22:33:06.602851  283957 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt with IP's: []
	I0831 22:33:07.200696  283957 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt ...
	I0831 22:33:07.200743  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: {Name:mk55d73b23a418e158fddd2a2029982fed955c08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:07.200943  283957 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.key ...
	I0831 22:33:07.200989  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.key: {Name:mk59a6767b126a801e3c15dd1fd3a3348aa14ff1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:07.201084  283957 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.key.38417bc3
	I0831 22:33:07.201105  283957 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.crt.38417bc3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0831 22:33:07.643963  283957 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.crt.38417bc3 ...
	I0831 22:33:07.643994  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.crt.38417bc3: {Name:mk8845045369642c2652f6024489c05d54865b33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:07.644178  283957 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.key.38417bc3 ...
	I0831 22:33:07.644191  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.key.38417bc3: {Name:mk69db76c63a333ce273b6b1150f927c3534bc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:07.644723  283957 certs.go:381] copying /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.crt.38417bc3 -> /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.crt
	I0831 22:33:07.644822  283957 certs.go:385] copying /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.key.38417bc3 -> /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.key
	I0831 22:33:07.644885  283957 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/proxy-client.key
	I0831 22:33:07.644904  283957 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/proxy-client.crt with IP's: []
	I0831 22:33:07.769112  283957 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/proxy-client.crt ...
	I0831 22:33:07.769146  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/proxy-client.crt: {Name:mk709a4df7e86ad0190ea4e7918008cb10101a95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:07.769717  283957 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/proxy-client.key ...
	I0831 22:33:07.769737  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/proxy-client.key: {Name:mk55ab13960a2f23e6e30c97ac70318ef038cdd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:07.769938  283957 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca-key.pem (1679 bytes)
	I0831 22:33:07.769982  283957 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem (1082 bytes)
	I0831 22:33:07.770019  283957 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem (1123 bytes)
	I0831 22:33:07.770046  283957 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem (1675 bytes)
	I0831 22:33:07.770668  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 22:33:07.796259  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 22:33:07.828503  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 22:33:07.867326  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0831 22:33:07.892900  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0831 22:33:07.917006  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0831 22:33:07.941026  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 22:33:07.964770  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0831 22:33:07.989226  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 22:33:08.021885  283957 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0831 22:33:08.053952  283957 ssh_runner.go:195] Run: openssl version
	I0831 22:33:08.060101  283957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 22:33:08.070747  283957 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:33:08.074388  283957 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:33:08.074466  283957 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:33:08.082225  283957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0831 22:33:08.092117  283957 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 22:33:08.095591  283957 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0831 22:33:08.095645  283957 kubeadm.go:392] StartCluster: {Name:addons-926553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-926553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:33:08.095732  283957 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0831 22:33:08.095788  283957 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0831 22:33:08.141952  283957 cri.go:89] found id: ""
	I0831 22:33:08.142024  283957 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0831 22:33:08.151170  283957 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0831 22:33:08.160571  283957 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0831 22:33:08.160636  283957 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0831 22:33:08.169922  283957 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0831 22:33:08.169943  283957 kubeadm.go:157] found existing configuration files:
	
	I0831 22:33:08.170003  283957 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0831 22:33:08.178997  283957 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0831 22:33:08.179084  283957 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0831 22:33:08.187643  283957 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0831 22:33:08.196349  283957 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0831 22:33:08.196437  283957 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0831 22:33:08.205030  283957 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0831 22:33:08.213907  283957 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0831 22:33:08.213994  283957 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0831 22:33:08.222476  283957 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0831 22:33:08.231658  283957 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0831 22:33:08.231726  283957 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0831 22:33:08.240283  283957 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0831 22:33:08.279889  283957 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0831 22:33:08.280060  283957 kubeadm.go:310] [preflight] Running pre-flight checks
	I0831 22:33:08.302891  283957 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0831 22:33:08.302989  283957 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-aws
	I0831 22:33:08.303047  283957 kubeadm.go:310] OS: Linux
	I0831 22:33:08.303109  283957 kubeadm.go:310] CGROUPS_CPU: enabled
	I0831 22:33:08.303175  283957 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0831 22:33:08.303241  283957 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0831 22:33:08.303307  283957 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0831 22:33:08.303382  283957 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0831 22:33:08.303472  283957 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0831 22:33:08.303576  283957 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0831 22:33:08.303659  283957 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0831 22:33:08.303742  283957 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0831 22:33:08.375106  283957 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0831 22:33:08.375280  283957 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0831 22:33:08.375404  283957 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0831 22:33:08.381947  283957 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0831 22:33:08.385255  283957 out.go:235]   - Generating certificates and keys ...
	I0831 22:33:08.385428  283957 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0831 22:33:08.385523  283957 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0831 22:33:08.637437  283957 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0831 22:33:09.463131  283957 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0831 22:33:10.033346  283957 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0831 22:33:10.906857  283957 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0831 22:33:11.453764  283957 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0831 22:33:11.454108  283957 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-926553 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0831 22:33:12.062393  283957 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0831 22:33:12.062743  283957 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-926553 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0831 22:33:12.309286  283957 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0831 22:33:12.573925  283957 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0831 22:33:12.914344  283957 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0831 22:33:12.914632  283957 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0831 22:33:13.308464  283957 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0831 22:33:13.644764  283957 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0831 22:33:14.238434  283957 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0831 22:33:14.678365  283957 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0831 22:33:15.169684  283957 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0831 22:33:15.170746  283957 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0831 22:33:15.174253  283957 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0831 22:33:15.177263  283957 out.go:235]   - Booting up control plane ...
	I0831 22:33:15.177380  283957 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0831 22:33:15.177460  283957 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0831 22:33:15.178516  283957 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0831 22:33:15.190024  283957 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0831 22:33:15.196959  283957 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0831 22:33:15.197061  283957 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0831 22:33:15.294087  283957 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0831 22:33:15.294208  283957 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0831 22:33:16.295207  283957 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00118568s
	I0831 22:33:16.295299  283957 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0831 22:33:22.297225  283957 kubeadm.go:310] [api-check] The API server is healthy after 6.002301756s
	I0831 22:33:22.317717  283957 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0831 22:33:22.333223  283957 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0831 22:33:22.356793  283957 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0831 22:33:22.356989  283957 kubeadm.go:310] [mark-control-plane] Marking the node addons-926553 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0831 22:33:22.368934  283957 kubeadm.go:310] [bootstrap-token] Using token: bpizuk.5bt7ue9fr9w4aczf
	I0831 22:33:22.373429  283957 out.go:235]   - Configuring RBAC rules ...
	I0831 22:33:22.373568  283957 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0831 22:33:22.379902  283957 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0831 22:33:22.391608  283957 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0831 22:33:22.397570  283957 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0831 22:33:22.401429  283957 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0831 22:33:22.405725  283957 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0831 22:33:22.704690  283957 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0831 22:33:23.180935  283957 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0831 22:33:23.704316  283957 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0831 22:33:23.707745  283957 kubeadm.go:310] 
	I0831 22:33:23.707828  283957 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0831 22:33:23.707837  283957 kubeadm.go:310] 
	I0831 22:33:23.707924  283957 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0831 22:33:23.707936  283957 kubeadm.go:310] 
	I0831 22:33:23.707962  283957 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0831 22:33:23.708048  283957 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0831 22:33:23.708128  283957 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0831 22:33:23.708138  283957 kubeadm.go:310] 
	I0831 22:33:23.708191  283957 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0831 22:33:23.708200  283957 kubeadm.go:310] 
	I0831 22:33:23.708251  283957 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0831 22:33:23.708259  283957 kubeadm.go:310] 
	I0831 22:33:23.708311  283957 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0831 22:33:23.708384  283957 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0831 22:33:23.708476  283957 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0831 22:33:23.708490  283957 kubeadm.go:310] 
	I0831 22:33:23.708572  283957 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0831 22:33:23.708648  283957 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0831 22:33:23.708655  283957 kubeadm.go:310] 
	I0831 22:33:23.708737  283957 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bpizuk.5bt7ue9fr9w4aczf \
	I0831 22:33:23.708860  283957 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3593c859f62fc352e4288d7593bda1bad3208e885169afef8f46acbefa784a7c \
	I0831 22:33:23.708888  283957 kubeadm.go:310] 	--control-plane 
	I0831 22:33:23.708893  283957 kubeadm.go:310] 
	I0831 22:33:23.708977  283957 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0831 22:33:23.708982  283957 kubeadm.go:310] 
	I0831 22:33:23.709068  283957 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bpizuk.5bt7ue9fr9w4aczf \
	I0831 22:33:23.709169  283957 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3593c859f62fc352e4288d7593bda1bad3208e885169afef8f46acbefa784a7c 
	I0831 22:33:23.712617  283957 kubeadm.go:310] W0831 22:33:08.276569    1188 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0831 22:33:23.712923  283957 kubeadm.go:310] W0831 22:33:08.277503    1188 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0831 22:33:23.713163  283957 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-aws\n", err: exit status 1
	I0831 22:33:23.713299  283957 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0831 22:33:23.713314  283957 cni.go:84] Creating CNI manager for ""
	I0831 22:33:23.713322  283957 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0831 22:33:23.716282  283957 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0831 22:33:23.719220  283957 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0831 22:33:23.723271  283957 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0831 22:33:23.723293  283957 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0831 22:33:23.741607  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0831 22:33:24.052823  283957 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0831 22:33:24.052918  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:33:24.052970  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-926553 minikube.k8s.io/updated_at=2024_08_31T22_33_24_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2 minikube.k8s.io/name=addons-926553 minikube.k8s.io/primary=true
	I0831 22:33:24.230141  283957 ops.go:34] apiserver oom_adj: -16
	I0831 22:33:24.230269  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:33:24.730397  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:33:25.230993  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:33:25.730610  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:33:26.230407  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:33:26.730761  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:33:27.231064  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:33:27.730886  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:33:28.230560  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:33:28.321873  283957 kubeadm.go:1113] duration metric: took 4.26902395s to wait for elevateKubeSystemPrivileges
	I0831 22:33:28.321901  283957 kubeadm.go:394] duration metric: took 20.226260277s to StartCluster
	I0831 22:33:28.321917  283957 settings.go:142] acquiring lock: {Name:mkadbc7d53c5858a38d57ec152e52037ebee242b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:28.322035  283957 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-277799/kubeconfig
	I0831 22:33:28.322400  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/kubeconfig: {Name:mk030275545fba839e6cc35acffc3f7a124ed10a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:28.323046  283957 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 22:33:28.323174  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0831 22:33:28.323438  283957 config.go:182] Loaded profile config "addons-926553": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:33:28.323475  283957 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0831 22:33:28.323555  283957 addons.go:69] Setting yakd=true in profile "addons-926553"
	I0831 22:33:28.323574  283957 addons.go:234] Setting addon yakd=true in "addons-926553"
	I0831 22:33:28.323597  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.324068  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.324839  283957 addons.go:69] Setting cloud-spanner=true in profile "addons-926553"
	I0831 22:33:28.324866  283957 addons.go:234] Setting addon cloud-spanner=true in "addons-926553"
	I0831 22:33:28.324890  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.325338  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.325583  283957 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-926553"
	I0831 22:33:28.325617  283957 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-926553"
	I0831 22:33:28.325650  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.326088  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.326414  283957 addons.go:69] Setting registry=true in profile "addons-926553"
	I0831 22:33:28.326440  283957 addons.go:234] Setting addon registry=true in "addons-926553"
	I0831 22:33:28.326465  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.326854  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.329500  283957 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-926553"
	I0831 22:33:28.329573  283957 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-926553"
	I0831 22:33:28.329606  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.330028  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.342354  283957 addons.go:69] Setting default-storageclass=true in profile "addons-926553"
	I0831 22:33:28.342397  283957 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-926553"
	I0831 22:33:28.342712  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.342891  283957 addons.go:69] Setting storage-provisioner=true in profile "addons-926553"
	I0831 22:33:28.342929  283957 addons.go:234] Setting addon storage-provisioner=true in "addons-926553"
	I0831 22:33:28.342990  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.349869  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.360101  283957 addons.go:69] Setting gcp-auth=true in profile "addons-926553"
	I0831 22:33:28.360166  283957 mustload.go:65] Loading cluster: addons-926553
	I0831 22:33:28.360443  283957 config.go:182] Loaded profile config "addons-926553": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:33:28.360907  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.366186  283957 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-926553"
	I0831 22:33:28.366367  283957 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-926553"
	I0831 22:33:28.366876  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.375213  283957 addons.go:69] Setting ingress=true in profile "addons-926553"
	I0831 22:33:28.375277  283957 addons.go:234] Setting addon ingress=true in "addons-926553"
	I0831 22:33:28.375340  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.376302  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.380625  283957 addons.go:69] Setting volcano=true in profile "addons-926553"
	I0831 22:33:28.380724  283957 addons.go:234] Setting addon volcano=true in "addons-926553"
	I0831 22:33:28.380800  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.381420  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.394994  283957 addons.go:69] Setting ingress-dns=true in profile "addons-926553"
	I0831 22:33:28.395035  283957 addons.go:234] Setting addon ingress-dns=true in "addons-926553"
	I0831 22:33:28.395105  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.395705  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.401028  283957 addons.go:69] Setting volumesnapshots=true in profile "addons-926553"
	I0831 22:33:28.401089  283957 addons.go:234] Setting addon volumesnapshots=true in "addons-926553"
	I0831 22:33:28.401140  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.401758  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.402549  283957 out.go:177] * Verifying Kubernetes components...
	I0831 22:33:28.428671  283957 addons.go:69] Setting inspektor-gadget=true in profile "addons-926553"
	I0831 22:33:28.428730  283957 addons.go:234] Setting addon inspektor-gadget=true in "addons-926553"
	I0831 22:33:28.428784  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.429708  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.482979  283957 addons.go:69] Setting metrics-server=true in profile "addons-926553"
	I0831 22:33:28.483022  283957 addons.go:234] Setting addon metrics-server=true in "addons-926553"
	I0831 22:33:28.483067  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.483527  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
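[editor's note] The toEnable map logged at the start of this phase drives one "Setting addon X=true" pass per enabled entry; each pass records a host check and a "docker container inspect addons-926553" before the addon's manifests are copied over. A hedged fan-out sketch of that pattern follows; the function and variable names are assumptions for illustration, not minikube's types:

	// addonFanoutSketch.go - illustrative sketch, not minikube code.
	package main

	import (
		"fmt"
		"sort"
		"sync"
	)

	func enableAddon(profile, name string, wg *sync.WaitGroup) {
		defer wg.Done()
		// In the real log each addon also performs a host check plus a
		// "docker container inspect <profile>" before shipping manifests.
		fmt.Printf("Setting addon %s=true in %q\n", name, profile)
	}

	func main() {
		toEnable := map[string]bool{ // subset of the map in the log above
			"yakd": true, "cloud-spanner": true, "registry": true,
			"ingress": true, "metrics-server": true, "volcano": true,
			"ambassador": false,
		}
		profile := "addons-926553"

		names := make([]string, 0, len(toEnable))
		for n, on := range toEnable {
			if on {
				names = append(names, n)
			}
		}
		sort.Strings(names)

		var wg sync.WaitGroup
		for _, n := range names {
			wg.Add(1)
			go enableAddon(profile, n, &wg)
		}
		wg.Wait()
	}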
	I0831 22:33:28.544912  283957 out.go:177]   - Using image docker.io/registry:2.8.3
	I0831 22:33:28.556676  283957 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0831 22:33:28.590337  283957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:33:28.592737  283957 addons.go:234] Setting addon default-storageclass=true in "addons-926553"
	I0831 22:33:28.592814  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.593533  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.602703  283957 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0831 22:33:28.613912  283957 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0831 22:33:28.616792  283957 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0831 22:33:28.616842  283957 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0831 22:33:28.616938  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.642744  283957 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0831 22:33:28.642778  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0831 22:33:28.642878  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.643214  283957 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0831 22:33:28.643541  283957 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0831 22:33:28.643555  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0831 22:33:28.643629  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.673321  283957 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 22:33:28.673653  283957 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0831 22:33:28.676085  283957 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 22:33:28.676117  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0831 22:33:28.676204  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.680157  283957 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0831 22:33:28.682935  283957 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	W0831 22:33:28.685152  283957 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0831 22:33:28.688167  283957 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0831 22:33:28.688191  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0831 22:33:28.688265  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.711171  283957 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0831 22:33:28.711327  283957 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0831 22:33:28.711525  283957 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0831 22:33:28.713867  283957 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0831 22:33:28.713897  283957 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0831 22:33:28.714009  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.716525  283957 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0831 22:33:28.716567  283957 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0831 22:33:28.716656  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.730720  283957 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0831 22:33:28.736851  283957 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0831 22:33:28.740033  283957 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0831 22:33:28.742712  283957 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0831 22:33:28.743079  283957 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0831 22:33:28.743093  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0831 22:33:28.743174  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.743485  283957 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0831 22:33:28.743695  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.746617  283957 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0831 22:33:28.746887  283957 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0831 22:33:28.746921  283957 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0831 22:33:28.746978  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.780079  283957 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0831 22:33:28.780109  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0831 22:33:28.780197  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.789453  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0831 22:33:28.790925  283957 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0831 22:33:28.793675  283957 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0831 22:33:28.799676  283957 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0831 22:33:28.803611  283957 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0831 22:33:28.803643  283957 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0831 22:33:28.803743  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.812459  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:28.863289  283957 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-926553"
	I0831 22:33:28.863352  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.863920  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.868300  283957 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0831 22:33:28.868325  283957 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0831 22:33:28.868620  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.883317  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:28.949248  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:28.960803  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:29.002979  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:29.003648  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:29.046634  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:29.047268  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:29.055178  283957 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0831 22:33:29.055568  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:29.061509  283957 out.go:177]   - Using image docker.io/busybox:stable
	I0831 22:33:29.064558  283957 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0831 22:33:29.064583  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0831 22:33:29.064648  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:29.066321  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:29.088665  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:29.089600  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:29.108712  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:29.442641  283957 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0831 22:33:29.442677  283957 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0831 22:33:29.526255  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0831 22:33:29.530947  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0831 22:33:29.533064  283957 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0831 22:33:29.533105  283957 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0831 22:33:29.534377  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0831 22:33:29.596562  283957 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0831 22:33:29.596599  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0831 22:33:29.613705  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0831 22:33:29.630607  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0831 22:33:29.647426  283957 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0831 22:33:29.647458  283957 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0831 22:33:29.653219  283957 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0831 22:33:29.653263  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0831 22:33:29.657539  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 22:33:29.660517  283957 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0831 22:33:29.660568  283957 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0831 22:33:29.663695  283957 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.073312217s)
	I0831 22:33:29.663842  283957 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 22:33:29.666078  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0831 22:33:29.710627  283957 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0831 22:33:29.710667  283957 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0831 22:33:29.735336  283957 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0831 22:33:29.735373  283957 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0831 22:33:29.784696  283957 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0831 22:33:29.784736  283957 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0831 22:33:29.855640  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0831 22:33:29.877050  283957 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0831 22:33:29.877099  283957 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0831 22:33:29.904145  283957 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0831 22:33:29.904181  283957 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0831 22:33:29.911988  283957 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0831 22:33:29.912025  283957 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0831 22:33:29.945936  283957 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0831 22:33:29.945983  283957 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0831 22:33:29.979809  283957 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0831 22:33:29.979844  283957 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0831 22:33:30.081879  283957 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0831 22:33:30.081924  283957 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0831 22:33:30.094433  283957 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0831 22:33:30.094470  283957 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0831 22:33:30.121606  283957 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0831 22:33:30.121648  283957 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0831 22:33:30.147465  283957 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0831 22:33:30.147495  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0831 22:33:30.194891  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0831 22:33:30.335224  283957 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0831 22:33:30.335253  283957 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0831 22:33:30.357845  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0831 22:33:30.370585  283957 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0831 22:33:30.370614  283957 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0831 22:33:30.380434  283957 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0831 22:33:30.380480  283957 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0831 22:33:30.470604  283957 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0831 22:33:30.470632  283957 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0831 22:33:30.474717  283957 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0831 22:33:30.474743  283957 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0831 22:33:30.480308  283957 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0831 22:33:30.480332  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0831 22:33:30.551614  283957 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0831 22:33:30.551645  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0831 22:33:30.555526  283957 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0831 22:33:30.555551  283957 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0831 22:33:30.572488  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0831 22:33:30.626735  283957 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0831 22:33:30.626772  283957 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0831 22:33:30.659143  283957 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0831 22:33:30.659168  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0831 22:33:30.708306  283957 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0831 22:33:30.708339  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0831 22:33:30.751947  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0831 22:33:30.779486  283957 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0831 22:33:30.779512  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0831 22:33:30.883168  283957 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0831 22:33:30.883208  283957 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0831 22:33:31.034072  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0831 22:33:32.347271  283957 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.557782579s)
	I0831 22:33:32.347302  283957 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
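[editor's note] The sed pipeline completed above edits the coredns ConfigMap in place: it inserts a hosts block resolving host.minikube.internal to 192.168.49.1 before the "forward . /etc/resolv.conf" line and a "log" directive before "errors", then feeds the result to "kubectl replace -f -". A small sketch of what those two insertions produce on a minimal (assumed) Corefile shape; illustrative only:

	// corednsHostRecordSketch.go - illustrative of the sed pipeline above, not minikube code.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Minimal Corefile shape (assumed) before the edit.
		corefile := `.:53 {
        errors
        forward . /etc/resolv.conf
        cache 30
}`

		// Hosts block inserted before the forward directive, matching the log's sed expression.
		hostsBlock := `        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf`

		out := strings.Replace(corefile, "        forward . /etc/resolv.conf", hostsBlock, 1)
		// A "log" directive is inserted before "errors" by the second sed expression.
		out = strings.Replace(out, "        errors", "        log\n        errors", 1)
		fmt.Println(out) // roughly what "kubectl replace -f -" receives
	}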
	I0831 22:33:32.348296  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.821995008s)
	I0831 22:33:33.626333  283957 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-926553" context rescaled to 1 replicas
	I0831 22:33:34.257701  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.726689759s)
	I0831 22:33:34.257836  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.72343282s)
	I0831 22:33:35.750704  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.136952511s)
	I0831 22:33:35.750778  283957 addons.go:475] Verifying addon ingress=true in "addons-926553"
	I0831 22:33:35.750934  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.120293953s)
	I0831 22:33:35.751173  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.093604816s)
	I0831 22:33:35.751232  283957 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.087374745s)
	I0831 22:33:35.751352  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.085238138s)
	I0831 22:33:35.751534  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.89586536s)
	I0831 22:33:35.751557  283957 addons.go:475] Verifying addon registry=true in "addons-926553"
	I0831 22:33:35.752026  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.557102507s)
	I0831 22:33:35.752048  283957 addons.go:475] Verifying addon metrics-server=true in "addons-926553"
	I0831 22:33:35.752087  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.394213157s)
	I0831 22:33:35.752485  283957 node_ready.go:35] waiting up to 6m0s for node "addons-926553" to be "Ready" ...
	I0831 22:33:35.753392  283957 out.go:177] * Verifying ingress addon...
	I0831 22:33:35.753389  283957 out.go:177] * Verifying registry addon...
	I0831 22:33:35.755348  283957 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-926553 service yakd-dashboard -n yakd-dashboard
	
	I0831 22:33:35.757767  283957 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0831 22:33:35.757777  283957 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0831 22:33:35.790994  283957 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0831 22:33:35.791083  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:35.805166  283957 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0831 22:33:35.805241  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
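[editor's note] The "waiting for pod ... current state: Pending" lines that follow come from kapi.go polling each label selector until the matching pods leave Pending, bounded by the addon verification timeout. A minimal sketch of that wait loop using plain kubectl and jsonpath (not minikube's kapi helper; namespace, selector, and timeout are taken from the log for illustration):

	// podWaitSketch.go - illustrative of the kapi "waiting for pod" loop, not minikube code.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// allRunning reports whether every pod matching the selector is in phase Running.
	func allRunning(namespace, selector string) bool {
		out, err := exec.Command("kubectl", "get", "pods",
			"-n", namespace, "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err != nil {
			return false
		}
		phases := strings.Fields(string(out))
		if len(phases) == 0 {
			return false // nothing scheduled yet
		}
		for _, p := range phases {
			if p != "Running" {
				return false
			}
		}
		return true
	}

	func main() {
		deadline := time.Now().Add(6 * time.Minute)
		for !allRunning("ingress-nginx", "app.kubernetes.io/name=ingress-nginx") {
			if time.Now().After(deadline) {
				fmt.Println("timed out waiting for ingress-nginx pods")
				return
			}
			time.Sleep(500 * time.Millisecond) // the log polls on a similar cadence
		}
		fmt.Println("ingress-nginx pods are Running")
	}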
	I0831 22:33:35.829747  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.25721674s)
	W0831 22:33:35.830055  283957 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0831 22:33:35.830104  283957 retry.go:31] will retry after 224.217796ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0831 22:33:35.829894  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.077890025s)
	W0831 22:33:35.831762  283957 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0831 22:33:36.055322  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
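[editor's note] The failure above is a CRD ordering race: the VolumeSnapshotClass object is submitted in the same apply as the CRDs that define it, so the first attempt ends with "ensure CRDs are installed first". retry.go backs off (~224ms here) and re-applies, this time with --force. A hedged sketch of that retry-on-CRD-race pattern; not minikube's retry.go, and the kubectl path and manifest names are copied from the log purely for illustration:

	// applyRetrySketch.go - illustrative retry pattern, not minikube's retry.go.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// apply runs kubectl apply over the snapshot manifests, optionally with extra flags.
	func apply(extra ...string) error {
		args := append([]string{"apply"}, extra...)
		args = append(args,
			"-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
			"-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml")
		out, err := exec.Command("/var/lib/minikube/binaries/v1.31.0/kubectl", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%v: %s", err, out)
		}
		return nil
	}

	func main() {
		err := apply()
		if err != nil && strings.Contains(err.Error(), "ensure CRDs are installed first") {
			// Back off briefly so the just-created CRDs are established,
			// then retry; the log retries with --force.
			time.Sleep(250 * time.Millisecond)
			err = apply("--force")
		}
		if err != nil {
			fmt.Println("apply failed:", err)
			return
		}
		fmt.Println("snapshot manifests applied")
	}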
	I0831 22:33:36.093372  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.059244383s)
	I0831 22:33:36.093453  283957 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-926553"
	I0831 22:33:36.096487  283957 out.go:177] * Verifying csi-hostpath-driver addon...
	I0831 22:33:36.100111  283957 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0831 22:33:36.115482  283957 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0831 22:33:36.115552  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:36.263976  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:36.265062  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:36.604587  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:36.787244  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:36.788063  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:37.104478  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:37.265822  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:37.267285  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:37.604559  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:37.756432  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:33:37.765094  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:37.766439  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:38.119367  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:38.262590  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:38.263797  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:38.604697  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:38.763609  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:38.764734  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:39.104910  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:39.268592  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:39.269044  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:39.283217  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.227800168s)
	I0831 22:33:39.608539  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:39.699330  283957 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0831 22:33:39.699446  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:39.716174  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:39.763187  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:39.763930  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:39.822207  283957 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0831 22:33:39.846744  283957 addons.go:234] Setting addon gcp-auth=true in "addons-926553"
	I0831 22:33:39.846795  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:39.847250  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:39.875523  283957 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0831 22:33:39.875573  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:39.898490  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:39.991759  283957 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0831 22:33:39.994410  283957 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0831 22:33:39.996970  283957 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0831 22:33:39.996996  283957 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0831 22:33:40.029853  283957 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0831 22:33:40.029886  283957 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0831 22:33:40.054335  283957 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0831 22:33:40.054357  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0831 22:33:40.077923  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0831 22:33:40.110092  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:40.256734  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:33:40.262779  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:40.263788  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:40.618485  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:40.635469  283957 addons.go:475] Verifying addon gcp-auth=true in "addons-926553"
	I0831 22:33:40.638250  283957 out.go:177] * Verifying gcp-auth addon...
	I0831 22:33:40.641904  283957 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0831 22:33:40.717924  283957 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0831 22:33:40.717949  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:40.760808  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:40.761737  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:41.103631  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:41.147102  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:41.261846  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:41.262577  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:41.605179  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:41.645765  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:41.762543  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:41.764051  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:42.105362  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:42.148283  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:42.258237  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:33:42.263214  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:42.264306  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:42.604818  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:42.646007  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:42.762250  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:42.762606  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:43.103968  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:43.145529  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:43.261669  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:43.262507  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:43.603902  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:43.645804  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:43.762089  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:43.762820  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:44.104008  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:44.145229  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:44.261278  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:44.262098  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:44.604072  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:44.645225  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:44.755790  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:33:44.762238  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:44.763675  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:45.119481  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:45.151439  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:45.262923  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:45.263585  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:45.603923  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:45.645062  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:45.762245  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:45.763179  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:46.103665  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:46.145991  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:46.262108  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:46.262871  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:46.603987  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:46.645848  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:46.755967  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:33:46.762356  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:46.763040  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:47.103999  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:47.145133  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:47.265067  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:47.265999  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:47.604241  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:47.645521  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:47.761239  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:47.762226  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:48.104502  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:48.145951  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:48.261871  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:48.262973  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:48.604572  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:48.645471  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:48.762598  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:48.763120  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:49.104271  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:49.145932  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:49.256720  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:33:49.262226  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:49.263641  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:49.604683  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:49.645947  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:49.761803  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:49.762015  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:50.103842  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:50.145422  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:50.261604  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:50.262384  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:50.604492  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:50.645631  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:50.762236  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:50.762361  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:51.104382  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:51.145709  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:51.261382  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:51.262159  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:51.604037  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:51.645599  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:51.756631  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:33:51.762132  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:51.762943  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:52.103840  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:52.146303  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:52.260993  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:52.262050  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:52.604518  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:52.645695  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:52.762149  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:52.762978  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:53.104308  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:53.145453  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:53.262149  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:53.262946  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:53.604459  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:53.645652  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:53.762137  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:53.762727  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:54.104542  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:54.145161  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:54.255923  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:33:54.262062  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:54.263015  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:54.603912  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:54.645424  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:54.761724  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:54.763411  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:55.104967  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:55.145553  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:55.262546  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:55.262785  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:55.604748  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:55.645583  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:55.761826  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:55.763402  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:56.105089  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:56.146463  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:56.256974  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:33:56.263076  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:56.263723  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:56.606473  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:56.647735  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:56.764781  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:56.766164  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:57.104318  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:57.146098  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:57.269923  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:57.271223  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:57.604825  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:57.645919  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:57.763180  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:57.763592  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:58.104174  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:58.145739  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:58.261942  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:58.262811  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:58.603886  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:58.645351  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:58.757020  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:33:58.761675  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:58.763460  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:59.104110  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:59.145526  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:59.262377  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:59.262612  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:59.604341  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:59.645727  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:59.762132  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:59.762980  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:00.136282  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:00.175701  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:00.297607  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:00.298427  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:00.605093  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:00.645870  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:00.757140  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:34:00.762169  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:00.763557  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:01.104348  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:01.146225  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:01.261098  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:01.262282  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:01.603884  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:01.645426  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:01.762105  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:01.762957  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:02.104192  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:02.145434  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:02.262134  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:02.262894  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:02.603513  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:02.645138  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:02.762333  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:02.763186  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:03.104291  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:03.145545  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:03.256690  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:34:03.262509  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:03.263063  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:03.604219  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:03.645652  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:03.761550  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:03.763199  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:04.103986  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:04.145092  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:04.260906  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:04.261910  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:04.604129  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:04.645678  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:04.762705  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:04.762793  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:05.104713  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:05.145523  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:05.262711  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:05.263142  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:05.603656  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:05.645384  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:05.756593  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:34:05.762220  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:05.762442  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:06.104276  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:06.145977  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:06.263109  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:06.264246  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:06.605053  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:06.645105  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:06.761724  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:06.762593  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:07.104549  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:07.146001  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:07.262265  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:07.262528  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:07.603862  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:07.645120  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:07.762233  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:07.762720  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:08.104365  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:08.145901  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:08.256630  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:34:08.262630  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:08.263422  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:08.603598  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:08.645197  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:08.761304  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:08.762056  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:09.104651  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:09.145806  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:09.262057  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:09.262888  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:09.604550  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:09.645470  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:09.762054  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:09.763110  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:10.104284  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:10.146229  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:10.257522  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:34:10.261362  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:10.261939  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:10.604131  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:10.646061  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:10.761374  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:10.762267  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:11.104686  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:11.145067  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:11.262003  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:11.262977  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:11.603815  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:11.645555  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:11.762188  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:11.762588  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:12.104640  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:12.145951  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:12.261659  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:12.262461  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:12.604373  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:12.645942  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:12.757266  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:34:12.762383  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:12.762661  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:13.103567  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:13.146021  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:13.262280  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:13.262859  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:13.604082  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:13.650984  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:13.761311  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:13.762021  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:14.104043  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:14.145580  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:14.261335  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:14.262064  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:14.603947  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:14.646679  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:14.762765  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:14.762778  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:15.117766  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:15.153240  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:15.267487  283957 node_ready.go:49] node "addons-926553" has status "Ready":"True"
	I0831 22:34:15.267564  283957 node_ready.go:38] duration metric: took 39.514789095s for node "addons-926553" to be "Ready" ...
	I0831 22:34:15.267592  283957 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 22:34:15.275732  283957 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0831 22:34:15.275809  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:15.276442  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:15.280854  283957 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-sljbt" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:15.629987  283957 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0831 22:34:15.630065  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:15.668193  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:15.787659  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:15.789778  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:16.105852  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:16.145825  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:16.278884  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:16.280021  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:16.605440  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:16.645318  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:16.762858  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:16.765023  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:16.787617  283957 pod_ready.go:93] pod "coredns-6f6b679f8f-sljbt" in "kube-system" namespace has status "Ready":"True"
	I0831 22:34:16.787642  283957 pod_ready.go:82] duration metric: took 1.506753163s for pod "coredns-6f6b679f8f-sljbt" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:16.787677  283957 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-926553" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:16.794111  283957 pod_ready.go:93] pod "etcd-addons-926553" in "kube-system" namespace has status "Ready":"True"
	I0831 22:34:16.794139  283957 pod_ready.go:82] duration metric: took 6.444642ms for pod "etcd-addons-926553" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:16.794155  283957 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-926553" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:16.799542  283957 pod_ready.go:93] pod "kube-apiserver-addons-926553" in "kube-system" namespace has status "Ready":"True"
	I0831 22:34:16.799569  283957 pod_ready.go:82] duration metric: took 5.386535ms for pod "kube-apiserver-addons-926553" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:16.799580  283957 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-926553" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:16.806679  283957 pod_ready.go:93] pod "kube-controller-manager-addons-926553" in "kube-system" namespace has status "Ready":"True"
	I0831 22:34:16.806707  283957 pod_ready.go:82] duration metric: took 7.118805ms for pod "kube-controller-manager-addons-926553" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:16.806721  283957 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2x2mt" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:16.857188  283957 pod_ready.go:93] pod "kube-proxy-2x2mt" in "kube-system" namespace has status "Ready":"True"
	I0831 22:34:16.857218  283957 pod_ready.go:82] duration metric: took 50.489915ms for pod "kube-proxy-2x2mt" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:16.857230  283957 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-926553" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:17.105581  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:17.146191  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:17.258600  283957 pod_ready.go:93] pod "kube-scheduler-addons-926553" in "kube-system" namespace has status "Ready":"True"
	I0831 22:34:17.258669  283957 pod_ready.go:82] duration metric: took 401.429253ms for pod "kube-scheduler-addons-926553" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:17.258694  283957 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:17.261667  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:17.262687  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:17.604862  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:17.646272  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:17.764936  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:17.765793  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:18.107931  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:18.207202  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:18.302173  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:18.302637  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:18.606559  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:18.646357  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:18.775904  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:18.780122  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:19.110151  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:19.146402  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:19.272716  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:19.275660  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:19.278834  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:19.607312  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:19.646989  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:19.764462  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:19.765340  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:20.108138  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:20.158436  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:20.265037  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:20.265857  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:20.607204  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:20.649365  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:20.766184  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:20.766778  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:21.117188  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:21.146229  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:21.264649  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:21.267229  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:21.605997  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:21.646189  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:21.769252  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:21.776432  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:21.779045  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:22.105797  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:22.205291  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:22.270938  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:22.272159  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:22.606720  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:22.645319  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:22.768212  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:22.769045  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:23.105481  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:23.146025  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:23.264716  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:23.266628  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:23.604946  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:23.645376  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:23.766158  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:23.767067  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:23.797542  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:24.105732  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:24.147335  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:24.266279  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:24.267261  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:24.606800  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:24.646677  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:24.766259  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:24.767462  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:25.106518  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:25.205453  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:25.314730  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:25.316362  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:25.607028  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:25.650341  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:25.770511  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:25.773834  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:26.104894  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:26.145895  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:26.263752  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:26.265016  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:26.267354  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:26.605178  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:26.645897  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:26.767644  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:26.768292  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:27.105737  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:27.145850  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:27.264918  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:27.265889  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:27.605106  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:27.645943  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:27.764477  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:27.766607  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:28.107629  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:28.207239  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:28.263084  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:28.264194  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:28.605775  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:28.646375  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:28.762388  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:28.764546  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:28.767472  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:29.106278  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:29.146524  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:29.265912  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:29.268490  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:29.605745  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:29.646867  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:29.765756  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:29.772314  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:30.122548  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:30.148292  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:30.279047  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:30.280259  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:30.604607  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:30.645863  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:30.765718  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:30.766955  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:30.770653  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:31.107084  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:31.145821  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:31.265330  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:31.266346  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:31.606351  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:31.646041  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:31.762658  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:31.765467  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:32.105934  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:32.145601  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:32.264777  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:32.266337  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:32.605229  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:32.646223  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:32.774989  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:32.785303  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:32.785784  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:33.105083  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:33.146512  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:33.263890  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:33.265992  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:33.606498  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:33.645662  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:33.763811  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:33.764819  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:34.105423  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:34.145701  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:34.266956  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:34.269628  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:34.605901  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:34.645149  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:34.763985  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:34.765038  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:35.112775  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:35.147243  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:35.271686  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:35.273029  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:35.277897  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:35.605975  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:35.645757  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:35.764098  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:35.764377  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:36.106052  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:36.146574  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:36.266738  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:36.269371  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:36.605156  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:36.646109  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:36.766567  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:36.767069  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:37.105482  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:37.145408  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:37.262842  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:37.264940  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:37.605630  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:37.645579  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:37.763903  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:37.764638  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:37.768030  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:38.105602  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:38.145844  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:38.279984  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:38.281288  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:38.606189  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:38.645328  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:38.766976  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:38.768517  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:39.107588  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:39.145837  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:39.267811  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:39.269043  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:39.604990  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:39.645894  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:39.764577  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:39.765987  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:39.783324  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:40.110946  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:40.149038  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:40.263916  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:40.264452  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:40.605702  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:40.646035  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:40.762583  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:40.765830  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:41.104722  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:41.146251  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:41.267893  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:41.270170  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:41.605079  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:41.646109  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:41.766428  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:41.767660  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:42.108325  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:42.152284  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:42.277162  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:42.278233  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:42.280340  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:42.605427  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:42.645085  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:42.764212  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:42.764388  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:43.105237  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:43.145656  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:43.264399  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:43.265176  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:43.605756  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:43.646160  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:43.767679  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:43.777857  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:44.106039  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:44.146446  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:44.299193  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:44.309733  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:44.326060  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:44.605473  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:44.645672  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:44.763034  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:44.764053  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:45.111264  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:45.159920  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:45.269565  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:45.270011  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:45.605305  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:45.646239  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:45.778410  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:45.779825  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:46.104643  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:46.146156  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:46.264631  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:46.267013  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:46.622647  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:46.646343  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:46.764083  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:46.765335  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:46.769473  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:47.105381  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:47.145795  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:47.263471  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:47.265096  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:47.605821  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:47.646133  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:47.763675  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:47.765088  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:48.105731  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:48.146388  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:48.277910  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:48.279115  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:48.607534  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:48.646422  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:48.771915  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:48.773860  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:48.783304  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:49.105357  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:49.146229  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:49.265098  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:49.266325  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:49.606355  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:49.645828  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:49.775820  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:49.779206  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:50.107042  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:50.146396  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:50.265357  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:50.268892  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:50.606663  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:50.649461  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:50.766106  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:50.768357  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:51.106471  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:51.145827  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:51.263868  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:51.273856  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:51.276035  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:51.605984  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:51.646501  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:51.770956  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:51.775016  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:52.105268  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:52.145877  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:52.263405  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:52.264901  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:52.606281  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:52.646369  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:52.774325  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:52.775093  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:53.106374  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:53.146473  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:53.267665  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:53.269478  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:53.276369  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:53.607786  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:53.705941  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:53.808463  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:53.808930  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:54.106742  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:54.146131  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:54.262778  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:54.263743  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:54.605780  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:54.645489  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:54.763543  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:54.764691  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:55.105073  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:55.146671  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:55.263581  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:55.264593  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:55.604808  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:55.645627  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:55.765957  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:55.767629  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:55.774463  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:56.106436  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:56.147428  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:56.274490  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:56.276298  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:56.606475  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:56.663836  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:56.768576  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:56.770804  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:57.105671  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:57.146711  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:57.264259  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:57.270150  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:57.607038  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:57.645905  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:57.766741  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:57.769544  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:57.777959  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:58.105648  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:58.146227  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:58.265054  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:58.265762  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:58.605480  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:58.646483  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:58.766211  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:58.768130  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:59.105789  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:59.146597  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:59.265677  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:59.269145  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:59.605347  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:59.645340  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:59.765278  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:59.767138  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:00.159210  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:00.164293  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:00.328550  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:00.329744  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:35:00.335942  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:00.606480  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:00.647703  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:00.763533  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:35:00.765948  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:01.106323  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:01.146291  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:01.264390  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:01.265200  283957 kapi.go:107] duration metric: took 1m25.507422226s to wait for kubernetes.io/minikube-addons=registry ...
	I0831 22:35:01.612483  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:01.646438  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:01.767506  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:02.106814  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:02.206008  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:02.262315  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:02.606382  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:02.645915  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:02.764109  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:02.766427  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:03.105521  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:03.145663  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:03.262337  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:03.605065  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:03.645471  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:03.763085  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:04.105575  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:04.146506  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:04.265127  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:04.622274  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:04.650220  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:04.763587  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:04.771154  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:05.107755  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:05.146930  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:05.263894  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:05.605375  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:05.645868  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:05.764781  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:06.105494  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:06.146233  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:06.262353  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:06.609706  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:06.646514  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:06.766654  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:07.105395  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:07.147002  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:07.265286  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:07.269347  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:07.605980  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:07.645479  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:07.766524  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:08.105796  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:08.146353  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:08.280220  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:08.606605  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:08.645535  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:08.764454  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:09.105835  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:09.145440  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:09.262310  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:09.605511  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:09.646558  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:09.765787  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:09.767713  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:10.107122  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:10.146046  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:10.271694  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:10.606278  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:10.645926  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:10.767543  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:11.106465  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:11.150614  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:11.263411  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:11.610421  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:11.653984  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:11.768938  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:12.105749  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:12.205140  283957 kapi.go:107] duration metric: took 1m31.563232697s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0831 22:35:12.208102  283957 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-926553 cluster.
	I0831 22:35:12.210660  283957 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0831 22:35:12.213274  283957 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
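	(The gcp-auth hint above can be exercised by labeling a pod; a minimal sketch, not part of this run — the pod name and image are hypothetical, the `gcp-auth-skip-secret` key is quoted from the message above, and the `true` value is an assumption about what the webhook checks:)

	kubectl --context addons-926553 run credless-test \
	  --image=gcr.io/k8s-minikube/busybox \
	  --labels="gcp-auth-skip-secret=true" \
	  --restart=Never -- sleep 3600
	# pods carrying this label should be skipped by the gcp-auth admission webhook,
	# so no GCP credential secret gets mounted into them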
	I0831 22:35:12.264022  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:12.265955  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:12.604934  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:12.763295  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:13.105032  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:13.262133  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:13.606171  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:13.764828  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:14.106701  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:14.261801  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:14.604865  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:14.765083  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:14.771193  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:15.110555  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:15.271540  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:15.605431  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:15.766094  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:16.110167  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:16.267927  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:16.606034  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:16.764905  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:16.766036  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:17.105448  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:17.264901  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:17.604881  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:17.764247  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:18.107297  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:18.263113  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:18.607207  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:18.763761  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:18.767424  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:19.105348  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:19.265466  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:19.606177  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:19.772514  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:20.107301  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:20.265082  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:20.606295  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:20.762817  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:20.769525  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:21.106007  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:21.262749  283957 kapi.go:107] duration metric: took 1m45.504982271s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0831 22:35:21.610332  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:22.123132  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:22.606681  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:23.106303  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:23.265347  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:23.610785  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:24.108937  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:24.604883  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:25.106603  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:25.266133  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:25.605612  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:25.786474  283957 pod_ready.go:93] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"True"
	I0831 22:35:25.786506  283957 pod_ready.go:82] duration metric: took 1m8.527790413s for pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace to be "Ready" ...
	I0831 22:35:25.786520  283957 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-9xvjf" in "kube-system" namespace to be "Ready" ...
	I0831 22:35:25.795290  283957 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-9xvjf" in "kube-system" namespace has status "Ready":"True"
	I0831 22:35:25.795318  283957 pod_ready.go:82] duration metric: took 8.78951ms for pod "nvidia-device-plugin-daemonset-9xvjf" in "kube-system" namespace to be "Ready" ...
	I0831 22:35:25.795341  283957 pod_ready.go:39] duration metric: took 1m10.52768296s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 22:35:25.795356  283957 api_server.go:52] waiting for apiserver process to appear ...
	I0831 22:35:25.795434  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0831 22:35:25.795702  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0831 22:35:25.886248  283957 cri.go:89] found id: "4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7"
	I0831 22:35:25.886322  283957 cri.go:89] found id: ""
	I0831 22:35:25.886358  283957 logs.go:276] 1 containers: [4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7]
	I0831 22:35:25.886451  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:25.890246  283957 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0831 22:35:25.890401  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0831 22:35:25.961145  283957 cri.go:89] found id: "a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678"
	I0831 22:35:25.961169  283957 cri.go:89] found id: ""
	I0831 22:35:25.961177  283957 logs.go:276] 1 containers: [a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678]
	I0831 22:35:25.961232  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:25.971647  283957 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0831 22:35:25.971720  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0831 22:35:26.081420  283957 cri.go:89] found id: "c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb"
	I0831 22:35:26.081442  283957 cri.go:89] found id: ""
	I0831 22:35:26.081450  283957 logs.go:276] 1 containers: [c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb]
	I0831 22:35:26.081509  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:26.086692  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0831 22:35:26.086769  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0831 22:35:26.106149  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:26.187973  283957 cri.go:89] found id: "29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b"
	I0831 22:35:26.187996  283957 cri.go:89] found id: ""
	I0831 22:35:26.188004  283957 logs.go:276] 1 containers: [29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b]
	I0831 22:35:26.188061  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:26.192877  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0831 22:35:26.192951  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0831 22:35:26.297630  283957 cri.go:89] found id: "38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87"
	I0831 22:35:26.297653  283957 cri.go:89] found id: ""
	I0831 22:35:26.297662  283957 logs.go:276] 1 containers: [38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87]
	I0831 22:35:26.297719  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:26.305863  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0831 22:35:26.305932  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0831 22:35:26.386494  283957 cri.go:89] found id: "cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0"
	I0831 22:35:26.386518  283957 cri.go:89] found id: ""
	I0831 22:35:26.386526  283957 logs.go:276] 1 containers: [cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0]
	I0831 22:35:26.386596  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:26.391560  283957 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0831 22:35:26.391632  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0831 22:35:26.446888  283957 cri.go:89] found id: "7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171"
	I0831 22:35:26.446911  283957 cri.go:89] found id: ""
	I0831 22:35:26.446919  283957 logs.go:276] 1 containers: [7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171]
	I0831 22:35:26.446974  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:26.452924  283957 logs.go:123] Gathering logs for coredns [c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb] ...
	I0831 22:35:26.452953  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb"
	I0831 22:35:26.520818  283957 logs.go:123] Gathering logs for kube-proxy [38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87] ...
	I0831 22:35:26.520850  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87"
	I0831 22:35:26.579607  283957 logs.go:123] Gathering logs for kube-controller-manager [cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0] ...
	I0831 22:35:26.579638  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0"
	I0831 22:35:26.605871  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:26.676077  283957 logs.go:123] Gathering logs for kindnet [7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171] ...
	I0831 22:35:26.676186  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171"
	I0831 22:35:26.772215  283957 logs.go:123] Gathering logs for CRI-O ...
	I0831 22:35:26.772299  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0831 22:35:26.885704  283957 logs.go:123] Gathering logs for kubelet ...
	I0831 22:35:26.885743  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0831 22:35:26.971800  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006611    1497 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-926553' and this object
	W0831 22:35:26.972187  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006663    1497 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-926553' and this object
	W0831 22:35:26.972448  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006673    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:26.972661  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006614    1497 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-926553' and this object
	W0831 22:35:26.972903  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006691    1497 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:26.973166  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006699    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	I0831 22:35:27.026028  283957 logs.go:123] Gathering logs for describe nodes ...
	I0831 22:35:27.026122  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 22:35:27.121170  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:27.306579  283957 logs.go:123] Gathering logs for etcd [a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678] ...
	I0831 22:35:27.306611  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678"
	I0831 22:35:27.381339  283957 logs.go:123] Gathering logs for kube-scheduler [29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b] ...
	I0831 22:35:27.381381  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b"
	I0831 22:35:27.432923  283957 logs.go:123] Gathering logs for container status ...
	I0831 22:35:27.432958  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 22:35:27.505422  283957 logs.go:123] Gathering logs for dmesg ...
	I0831 22:35:27.505456  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 22:35:27.523608  283957 logs.go:123] Gathering logs for kube-apiserver [4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7] ...
	I0831 22:35:27.523691  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7"
	I0831 22:35:27.594979  283957 out.go:358] Setting ErrFile to fd 2...
	I0831 22:35:27.595049  283957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 22:35:27.595118  283957 out.go:270] X Problems detected in kubelet:
	W0831 22:35:27.595127  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006663    1497 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-926553' and this object
	W0831 22:35:27.595134  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006673    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:27.595140  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006614    1497 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-926553' and this object
	W0831 22:35:27.595148  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006691    1497 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:27.595158  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006699    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	I0831 22:35:27.595169  283957 out.go:358] Setting ErrFile to fd 2...
	I0831 22:35:27.595175  283957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:35:27.606018  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:28.107291  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:28.607326  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:29.107920  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:29.605540  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:30.116852  283957 kapi.go:107] duration metric: took 1m54.016739242s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0831 22:35:30.119299  283957 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, storage-provisioner, metrics-server, yakd, inspektor-gadget, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0831 22:35:30.123306  283957 addons.go:510] duration metric: took 2m1.799821522s for enable addons: enabled=[nvidia-device-plugin ingress-dns cloud-spanner storage-provisioner metrics-server yakd inspektor-gadget default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0831 22:35:37.595431  283957 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:35:37.609347  283957 api_server.go:72] duration metric: took 2m9.286263895s to wait for apiserver process to appear ...
	I0831 22:35:37.609372  283957 api_server.go:88] waiting for apiserver healthz status ...
	I0831 22:35:37.609409  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0831 22:35:37.609464  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0831 22:35:37.653375  283957 cri.go:89] found id: "4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7"
	I0831 22:35:37.653399  283957 cri.go:89] found id: ""
	I0831 22:35:37.653408  283957 logs.go:276] 1 containers: [4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7]
	I0831 22:35:37.653466  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:37.657014  283957 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0831 22:35:37.657091  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0831 22:35:37.702049  283957 cri.go:89] found id: "a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678"
	I0831 22:35:37.702081  283957 cri.go:89] found id: ""
	I0831 22:35:37.702090  283957 logs.go:276] 1 containers: [a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678]
	I0831 22:35:37.702148  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:37.705948  283957 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0831 22:35:37.706022  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0831 22:35:37.743979  283957 cri.go:89] found id: "c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb"
	I0831 22:35:37.744002  283957 cri.go:89] found id: ""
	I0831 22:35:37.744010  283957 logs.go:276] 1 containers: [c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb]
	I0831 22:35:37.744067  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:37.748167  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0831 22:35:37.748235  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0831 22:35:37.787366  283957 cri.go:89] found id: "29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b"
	I0831 22:35:37.787387  283957 cri.go:89] found id: ""
	I0831 22:35:37.787394  283957 logs.go:276] 1 containers: [29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b]
	I0831 22:35:37.787456  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:37.791268  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0831 22:35:37.791418  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0831 22:35:37.839012  283957 cri.go:89] found id: "38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87"
	I0831 22:35:37.839032  283957 cri.go:89] found id: ""
	I0831 22:35:37.839040  283957 logs.go:276] 1 containers: [38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87]
	I0831 22:35:37.839095  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:37.842773  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0831 22:35:37.842857  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0831 22:35:37.882906  283957 cri.go:89] found id: "cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0"
	I0831 22:35:37.882928  283957 cri.go:89] found id: ""
	I0831 22:35:37.882936  283957 logs.go:276] 1 containers: [cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0]
	I0831 22:35:37.883016  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:37.886592  283957 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0831 22:35:37.886701  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0831 22:35:37.929003  283957 cri.go:89] found id: "7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171"
	I0831 22:35:37.929026  283957 cri.go:89] found id: ""
	I0831 22:35:37.929034  283957 logs.go:276] 1 containers: [7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171]
	I0831 22:35:37.929089  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:37.932647  283957 logs.go:123] Gathering logs for coredns [c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb] ...
	I0831 22:35:37.932675  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb"
	I0831 22:35:37.976634  283957 logs.go:123] Gathering logs for kube-scheduler [29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b] ...
	I0831 22:35:37.976663  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b"
	I0831 22:35:38.029768  283957 logs.go:123] Gathering logs for kube-proxy [38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87] ...
	I0831 22:35:38.029845  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87"
	I0831 22:35:38.089134  283957 logs.go:123] Gathering logs for kindnet [7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171] ...
	I0831 22:35:38.089209  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171"
	I0831 22:35:38.133397  283957 logs.go:123] Gathering logs for container status ...
	I0831 22:35:38.133434  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 22:35:38.191973  283957 logs.go:123] Gathering logs for kubelet ...
	I0831 22:35:38.192003  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0831 22:35:38.254593  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006611    1497 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-926553' and this object
	W0831 22:35:38.254790  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006663    1497 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-926553' and this object
	W0831 22:35:38.255021  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006673    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:38.255206  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006614    1497 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-926553' and this object
	W0831 22:35:38.255426  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006691    1497 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:38.255652  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006699    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	I0831 22:35:38.293315  283957 logs.go:123] Gathering logs for dmesg ...
	I0831 22:35:38.293348  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 22:35:38.309324  283957 logs.go:123] Gathering logs for describe nodes ...
	I0831 22:35:38.309354  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 22:35:38.449465  283957 logs.go:123] Gathering logs for CRI-O ...
	I0831 22:35:38.449541  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0831 22:35:38.557894  283957 logs.go:123] Gathering logs for kube-apiserver [4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7] ...
	I0831 22:35:38.557935  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7"
	I0831 22:35:38.613020  283957 logs.go:123] Gathering logs for etcd [a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678] ...
	I0831 22:35:38.613053  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678"
	I0831 22:35:38.667543  283957 logs.go:123] Gathering logs for kube-controller-manager [cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0] ...
	I0831 22:35:38.667580  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0"
	I0831 22:35:38.774202  283957 out.go:358] Setting ErrFile to fd 2...
	I0831 22:35:38.774279  283957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 22:35:38.774360  283957 out.go:270] X Problems detected in kubelet:
	W0831 22:35:38.774399  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006663    1497 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-926553' and this object
	W0831 22:35:38.774433  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006673    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:38.774476  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006614    1497 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-926553' and this object
	W0831 22:35:38.774510  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006691    1497 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:38.774544  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006699    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	I0831 22:35:38.774579  283957 out.go:358] Setting ErrFile to fd 2...
	I0831 22:35:38.774586  283957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:35:48.775832  283957 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 22:35:48.783566  283957 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0831 22:35:48.786206  283957 api_server.go:141] control plane version: v1.31.0
	I0831 22:35:48.786241  283957 api_server.go:131] duration metric: took 11.176861075s to wait for apiserver health ...
	I0831 22:35:48.786251  283957 system_pods.go:43] waiting for kube-system pods to appear ...
	I0831 22:35:48.786273  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0831 22:35:48.786338  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0831 22:35:48.824896  283957 cri.go:89] found id: "4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7"
	I0831 22:35:48.824918  283957 cri.go:89] found id: ""
	I0831 22:35:48.824927  283957 logs.go:276] 1 containers: [4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7]
	I0831 22:35:48.824984  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:48.828359  283957 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0831 22:35:48.828472  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0831 22:35:48.869702  283957 cri.go:89] found id: "a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678"
	I0831 22:35:48.869727  283957 cri.go:89] found id: ""
	I0831 22:35:48.869735  283957 logs.go:276] 1 containers: [a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678]
	I0831 22:35:48.869811  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:48.873344  283957 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0831 22:35:48.873422  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0831 22:35:48.912098  283957 cri.go:89] found id: "c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb"
	I0831 22:35:48.912121  283957 cri.go:89] found id: ""
	I0831 22:35:48.912129  283957 logs.go:276] 1 containers: [c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb]
	I0831 22:35:48.912185  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:48.915599  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0831 22:35:48.915669  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0831 22:35:48.958620  283957 cri.go:89] found id: "29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b"
	I0831 22:35:48.958644  283957 cri.go:89] found id: ""
	I0831 22:35:48.958653  283957 logs.go:276] 1 containers: [29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b]
	I0831 22:35:48.958744  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:48.962169  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0831 22:35:48.962244  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0831 22:35:49.006023  283957 cri.go:89] found id: "38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87"
	I0831 22:35:49.006048  283957 cri.go:89] found id: ""
	I0831 22:35:49.006056  283957 logs.go:276] 1 containers: [38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87]
	I0831 22:35:49.006118  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:49.011545  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0831 22:35:49.011654  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0831 22:35:49.054445  283957 cri.go:89] found id: "cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0"
	I0831 22:35:49.054469  283957 cri.go:89] found id: ""
	I0831 22:35:49.054478  283957 logs.go:276] 1 containers: [cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0]
	I0831 22:35:49.054566  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:49.058214  283957 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0831 22:35:49.058292  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0831 22:35:49.096178  283957 cri.go:89] found id: "7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171"
	I0831 22:35:49.096203  283957 cri.go:89] found id: ""
	I0831 22:35:49.096211  283957 logs.go:276] 1 containers: [7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171]
	I0831 22:35:49.096265  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:49.099723  283957 logs.go:123] Gathering logs for kube-proxy [38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87] ...
	I0831 22:35:49.099762  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87"
	I0831 22:35:49.139017  283957 logs.go:123] Gathering logs for kube-controller-manager [cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0] ...
	I0831 22:35:49.139048  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0"
	I0831 22:35:49.212561  283957 logs.go:123] Gathering logs for kindnet [7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171] ...
	I0831 22:35:49.212599  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171"
	I0831 22:35:49.257845  283957 logs.go:123] Gathering logs for container status ...
	I0831 22:35:49.257877  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 22:35:49.305619  283957 logs.go:123] Gathering logs for describe nodes ...
	I0831 22:35:49.305649  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 22:35:49.445076  283957 logs.go:123] Gathering logs for kube-apiserver [4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7] ...
	I0831 22:35:49.445108  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7"
	I0831 22:35:49.511728  283957 logs.go:123] Gathering logs for kube-scheduler [29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b] ...
	I0831 22:35:49.511762  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b"
	I0831 22:35:49.559678  283957 logs.go:123] Gathering logs for coredns [c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb] ...
	I0831 22:35:49.559715  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb"
	I0831 22:35:49.600032  283957 logs.go:123] Gathering logs for CRI-O ...
	I0831 22:35:49.600066  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0831 22:35:49.699340  283957 logs.go:123] Gathering logs for kubelet ...
	I0831 22:35:49.699382  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0831 22:35:49.762989  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006611    1497 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-926553' and this object
	W0831 22:35:49.763218  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006663    1497 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-926553' and this object
	W0831 22:35:49.763449  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006673    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:49.763640  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006614    1497 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-926553' and this object
	W0831 22:35:49.763860  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006691    1497 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:49.764086  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006699    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	I0831 22:35:49.804313  283957 logs.go:123] Gathering logs for dmesg ...
	I0831 22:35:49.804351  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 22:35:49.820979  283957 logs.go:123] Gathering logs for etcd [a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678] ...
	I0831 22:35:49.821065  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678"
	I0831 22:35:49.873854  283957 out.go:358] Setting ErrFile to fd 2...
	I0831 22:35:49.873890  283957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 22:35:49.873974  283957 out.go:270] X Problems detected in kubelet:
	W0831 22:35:49.873986  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006663    1497 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-926553' and this object
	W0831 22:35:49.874019  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006673    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:49.874034  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006614    1497 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-926553' and this object
	W0831 22:35:49.874045  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006691    1497 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:49.874060  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006699    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	I0831 22:35:49.874067  283957 out.go:358] Setting ErrFile to fd 2...
	I0831 22:35:49.874074  283957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:35:59.888880  283957 system_pods.go:59] 18 kube-system pods found
	I0831 22:35:59.888960  283957 system_pods.go:61] "coredns-6f6b679f8f-sljbt" [06a33215-e61b-42b2-8530-9e2d768b6a23] Running
	I0831 22:35:59.888985  283957 system_pods.go:61] "csi-hostpath-attacher-0" [b526f874-5e15-4810-bcf9-07f50444c734] Running
	I0831 22:35:59.889010  283957 system_pods.go:61] "csi-hostpath-resizer-0" [492b4def-63d0-41e6-8f33-d77ee6d90893] Running
	I0831 22:35:59.889033  283957 system_pods.go:61] "csi-hostpathplugin-25wkk" [ed567cf4-35bb-4262-b77d-eddfcd36f96f] Running
	I0831 22:35:59.889053  283957 system_pods.go:61] "etcd-addons-926553" [e15b7cec-a13a-4582-ab11-374125bab61d] Running
	I0831 22:35:59.889074  283957 system_pods.go:61] "kindnet-wdlp4" [242e7fe0-de25-4fe8-9782-2cadf1e54e96] Running
	I0831 22:35:59.889093  283957 system_pods.go:61] "kube-apiserver-addons-926553" [0dd9d30a-f426-4944-9893-5f1537844c18] Running
	I0831 22:35:59.889115  283957 system_pods.go:61] "kube-controller-manager-addons-926553" [1ded4cb8-0f32-4a80-86b8-0cd41aef43eb] Running
	I0831 22:35:59.889134  283957 system_pods.go:61] "kube-ingress-dns-minikube" [0e07561b-af16-4df3-8e88-438e733a8930] Running
	I0831 22:35:59.889154  283957 system_pods.go:61] "kube-proxy-2x2mt" [8feaacf8-dae0-4095-966f-966ceed56f36] Running
	I0831 22:35:59.889175  283957 system_pods.go:61] "kube-scheduler-addons-926553" [34db2652-e629-4869-a324-d4aca6527e88] Running
	I0831 22:35:59.889195  283957 system_pods.go:61] "metrics-server-84c5f94fbc-zwvsl" [8f41beaa-bd4a-4b1b-957e-d5fcd9f7aba9] Running
	I0831 22:35:59.889218  283957 system_pods.go:61] "nvidia-device-plugin-daemonset-9xvjf" [77f942fc-bc62-43bb-8ecc-dbe7e16cab48] Running
	I0831 22:35:59.889238  283957 system_pods.go:61] "registry-6fb4cdfc84-bf4pl" [000dc781-4a18-4524-b73a-681e34eaa529] Running
	I0831 22:35:59.889260  283957 system_pods.go:61] "registry-proxy-6dfvf" [f354b100-f3b2-4369-b6de-637de12a35fb] Running
	I0831 22:35:59.889280  283957 system_pods.go:61] "snapshot-controller-56fcc65765-55n8n" [49bef057-02c3-4bcf-8da2-c5fa9980394f] Running
	I0831 22:35:59.889300  283957 system_pods.go:61] "snapshot-controller-56fcc65765-j4sjq" [61dde631-692d-4175-9747-daa00ca99dc7] Running
	I0831 22:35:59.889321  283957 system_pods.go:61] "storage-provisioner" [396f5f2a-755e-492f-a0ac-fa7cb6f31a10] Running
	I0831 22:35:59.889343  283957 system_pods.go:74] duration metric: took 11.103084876s to wait for pod list to return data ...
	I0831 22:35:59.889364  283957 default_sa.go:34] waiting for default service account to be created ...
	I0831 22:35:59.892759  283957 default_sa.go:45] found service account: "default"
	I0831 22:35:59.892790  283957 default_sa.go:55] duration metric: took 3.404577ms for default service account to be created ...
	I0831 22:35:59.892801  283957 system_pods.go:116] waiting for k8s-apps to be running ...
	I0831 22:35:59.903086  283957 system_pods.go:86] 18 kube-system pods found
	I0831 22:35:59.903124  283957 system_pods.go:89] "coredns-6f6b679f8f-sljbt" [06a33215-e61b-42b2-8530-9e2d768b6a23] Running
	I0831 22:35:59.903134  283957 system_pods.go:89] "csi-hostpath-attacher-0" [b526f874-5e15-4810-bcf9-07f50444c734] Running
	I0831 22:35:59.903139  283957 system_pods.go:89] "csi-hostpath-resizer-0" [492b4def-63d0-41e6-8f33-d77ee6d90893] Running
	I0831 22:35:59.903143  283957 system_pods.go:89] "csi-hostpathplugin-25wkk" [ed567cf4-35bb-4262-b77d-eddfcd36f96f] Running
	I0831 22:35:59.903148  283957 system_pods.go:89] "etcd-addons-926553" [e15b7cec-a13a-4582-ab11-374125bab61d] Running
	I0831 22:35:59.903152  283957 system_pods.go:89] "kindnet-wdlp4" [242e7fe0-de25-4fe8-9782-2cadf1e54e96] Running
	I0831 22:35:59.903157  283957 system_pods.go:89] "kube-apiserver-addons-926553" [0dd9d30a-f426-4944-9893-5f1537844c18] Running
	I0831 22:35:59.903162  283957 system_pods.go:89] "kube-controller-manager-addons-926553" [1ded4cb8-0f32-4a80-86b8-0cd41aef43eb] Running
	I0831 22:35:59.903168  283957 system_pods.go:89] "kube-ingress-dns-minikube" [0e07561b-af16-4df3-8e88-438e733a8930] Running
	I0831 22:35:59.903173  283957 system_pods.go:89] "kube-proxy-2x2mt" [8feaacf8-dae0-4095-966f-966ceed56f36] Running
	I0831 22:35:59.903178  283957 system_pods.go:89] "kube-scheduler-addons-926553" [34db2652-e629-4869-a324-d4aca6527e88] Running
	I0831 22:35:59.903182  283957 system_pods.go:89] "metrics-server-84c5f94fbc-zwvsl" [8f41beaa-bd4a-4b1b-957e-d5fcd9f7aba9] Running
	I0831 22:35:59.903191  283957 system_pods.go:89] "nvidia-device-plugin-daemonset-9xvjf" [77f942fc-bc62-43bb-8ecc-dbe7e16cab48] Running
	I0831 22:35:59.903195  283957 system_pods.go:89] "registry-6fb4cdfc84-bf4pl" [000dc781-4a18-4524-b73a-681e34eaa529] Running
	I0831 22:35:59.903199  283957 system_pods.go:89] "registry-proxy-6dfvf" [f354b100-f3b2-4369-b6de-637de12a35fb] Running
	I0831 22:35:59.903208  283957 system_pods.go:89] "snapshot-controller-56fcc65765-55n8n" [49bef057-02c3-4bcf-8da2-c5fa9980394f] Running
	I0831 22:35:59.903212  283957 system_pods.go:89] "snapshot-controller-56fcc65765-j4sjq" [61dde631-692d-4175-9747-daa00ca99dc7] Running
	I0831 22:35:59.903225  283957 system_pods.go:89] "storage-provisioner" [396f5f2a-755e-492f-a0ac-fa7cb6f31a10] Running
	I0831 22:35:59.903232  283957 system_pods.go:126] duration metric: took 10.425939ms to wait for k8s-apps to be running ...
	I0831 22:35:59.903240  283957 system_svc.go:44] waiting for kubelet service to be running ....
	I0831 22:35:59.903305  283957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:35:59.914900  283957 system_svc.go:56] duration metric: took 11.64979ms WaitForService to wait for kubelet
	I0831 22:35:59.914930  283957 kubeadm.go:582] duration metric: took 2m31.591852103s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 22:35:59.914951  283957 node_conditions.go:102] verifying NodePressure condition ...
	I0831 22:35:59.918337  283957 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0831 22:35:59.918373  283957 node_conditions.go:123] node cpu capacity is 2
	I0831 22:35:59.918383  283957 node_conditions.go:105] duration metric: took 3.427642ms to run NodePressure ...
	I0831 22:35:59.918397  283957 start.go:241] waiting for startup goroutines ...
	I0831 22:35:59.918404  283957 start.go:246] waiting for cluster config update ...
	I0831 22:35:59.918419  283957 start.go:255] writing updated cluster config ...
	I0831 22:35:59.918717  283957 ssh_runner.go:195] Run: rm -f paused
	I0831 22:36:00.538015  283957 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0831 22:36:00.544227  283957 out.go:177] * Done! kubectl is now configured to use "addons-926553" cluster and "default" namespace by default
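Editor's note: earlier in this start log, api_server.go records "Checking apiserver healthz at https://192.168.49.2:8443/healthz" and the subsequent "returned 200: ok". As a rough, hypothetical sketch only (not minikube's actual implementation), that kind of healthz poll can be expressed in Go as below; the 2-minute deadline, 2-second retry interval, and the skipped TLS verification are assumptions made purely for illustration.

// Hypothetical sketch of an apiserver /healthz poll, loosely mirroring the
// wait recorded in the log above. Not minikube code; URL comes from the log,
// everything else is assumed for illustration.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver in this setup presents a cluster-internal cert, so the
		// sketch skips verification; a real client would pin the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
		}
		time.Sleep(2 * time.Second) // assumed retry interval
	}
	return fmt.Errorf("apiserver healthz at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}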
	
	
	==> CRI-O <==
	Aug 31 22:48:08 addons-926553 crio[969]: time="2024-08-31 22:48:08.627613951Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=0082d502-6bfe-4447-8aa0-b7d032326fd8 name=/runtime.v1.ImageService/ImageStatus
	Aug 31 22:48:08 addons-926553 crio[969]: time="2024-08-31 22:48:08.628512266Z" level=info msg="Creating container: default/hello-world-app-55bf9c44b4-9xzr8/hello-world-app" id=61e51550-39d8-4da7-8782-24703151746a name=/runtime.v1.RuntimeService/CreateContainer
	Aug 31 22:48:08 addons-926553 crio[969]: time="2024-08-31 22:48:08.628598034Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 31 22:48:08 addons-926553 crio[969]: time="2024-08-31 22:48:08.651722179Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/eb705b7d6cffec70fefd47b6ec693c3191055fad4a7cc73f73af11c6af172038/merged/etc/passwd: no such file or directory"
	Aug 31 22:48:08 addons-926553 crio[969]: time="2024-08-31 22:48:08.651895560Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/eb705b7d6cffec70fefd47b6ec693c3191055fad4a7cc73f73af11c6af172038/merged/etc/group: no such file or directory"
	Aug 31 22:48:08 addons-926553 crio[969]: time="2024-08-31 22:48:08.706359314Z" level=info msg="Created container 2f63a683509dfa3e67a56f5ea134db0ebf584bb626b624e43743fa29e65c23fc: default/hello-world-app-55bf9c44b4-9xzr8/hello-world-app" id=61e51550-39d8-4da7-8782-24703151746a name=/runtime.v1.RuntimeService/CreateContainer
	Aug 31 22:48:08 addons-926553 crio[969]: time="2024-08-31 22:48:08.707113984Z" level=info msg="Starting container: 2f63a683509dfa3e67a56f5ea134db0ebf584bb626b624e43743fa29e65c23fc" id=3f1dcd71-0f2e-4fb8-a8bd-a7542a42c948 name=/runtime.v1.RuntimeService/StartContainer
	Aug 31 22:48:08 addons-926553 crio[969]: time="2024-08-31 22:48:08.726034919Z" level=info msg="Started container" PID=8819 containerID=2f63a683509dfa3e67a56f5ea134db0ebf584bb626b624e43743fa29e65c23fc description=default/hello-world-app-55bf9c44b4-9xzr8/hello-world-app id=3f1dcd71-0f2e-4fb8-a8bd-a7542a42c948 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3f204122dcaee6195bc91f27c67e78101c2d61a60d302aa787f91451d1f428cf
	Aug 31 22:48:10 addons-926553 crio[969]: time="2024-08-31 22:48:10.161934844Z" level=info msg="Stopping container: e9081eefaa13418455ad62e9f887f29f53a426393b91a55ee6ee756ce9668b6e (timeout: 2s)" id=fae5ea10-87be-40f3-b88f-9523f602be64 name=/runtime.v1.RuntimeService/StopContainer
	Aug 31 22:48:11 addons-926553 crio[969]: time="2024-08-31 22:48:11.050701484Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d12bc634-312e-4ce9-b1c3-e0d617069aa4 name=/runtime.v1.ImageService/ImageStatus
	Aug 31 22:48:11 addons-926553 crio[969]: time="2024-08-31 22:48:11.050947185Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d12bc634-312e-4ce9-b1c3-e0d617069aa4 name=/runtime.v1.ImageService/ImageStatus
	Aug 31 22:48:12 addons-926553 crio[969]: time="2024-08-31 22:48:12.168673637Z" level=warning msg="Stopping container e9081eefaa13418455ad62e9f887f29f53a426393b91a55ee6ee756ce9668b6e with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=fae5ea10-87be-40f3-b88f-9523f602be64 name=/runtime.v1.RuntimeService/StopContainer
	Aug 31 22:48:12 addons-926553 conmon[4755]: conmon e9081eefaa13418455ad <ninfo>: container 4766 exited with status 137
	Aug 31 22:48:12 addons-926553 crio[969]: time="2024-08-31 22:48:12.305020770Z" level=info msg="Stopped container e9081eefaa13418455ad62e9f887f29f53a426393b91a55ee6ee756ce9668b6e: ingress-nginx/ingress-nginx-controller-bc57996ff-xrqz9/controller" id=fae5ea10-87be-40f3-b88f-9523f602be64 name=/runtime.v1.RuntimeService/StopContainer
	Aug 31 22:48:12 addons-926553 crio[969]: time="2024-08-31 22:48:12.305532397Z" level=info msg="Stopping pod sandbox: 97f5fa31394ebd44a561ea1d04fac15f02670f82c357560511bc65e0b9ff52dd" id=b93b4d32-ac67-4c4a-a241-2a8ecef96e4f name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 31 22:48:12 addons-926553 crio[969]: time="2024-08-31 22:48:12.308944277Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-ZD7CWHOQU7LQSU4P - [0:0]\n:KUBE-HP-23XDI3UQKEK5ACU3 - [0:0]\n-X KUBE-HP-23XDI3UQKEK5ACU3\n-X KUBE-HP-ZD7CWHOQU7LQSU4P\nCOMMIT\n"
	Aug 31 22:48:12 addons-926553 crio[969]: time="2024-08-31 22:48:12.310335594Z" level=info msg="Closing host port tcp:80"
	Aug 31 22:48:12 addons-926553 crio[969]: time="2024-08-31 22:48:12.310383774Z" level=info msg="Closing host port tcp:443"
	Aug 31 22:48:12 addons-926553 crio[969]: time="2024-08-31 22:48:12.311648183Z" level=info msg="Host port tcp:80 does not have an open socket"
	Aug 31 22:48:12 addons-926553 crio[969]: time="2024-08-31 22:48:12.311674899Z" level=info msg="Host port tcp:443 does not have an open socket"
	Aug 31 22:48:12 addons-926553 crio[969]: time="2024-08-31 22:48:12.311851932Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-bc57996ff-xrqz9 Namespace:ingress-nginx ID:97f5fa31394ebd44a561ea1d04fac15f02670f82c357560511bc65e0b9ff52dd UID:30ac112c-2cb9-44df-8b86-e9a9804b4efa NetNS:/var/run/netns/ca68a89e-56c3-45fe-89ce-5d7c4cf7a3d5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 31 22:48:12 addons-926553 crio[969]: time="2024-08-31 22:48:12.311989596Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-bc57996ff-xrqz9 from CNI network \"kindnet\" (type=ptp)"
	Aug 31 22:48:12 addons-926553 crio[969]: time="2024-08-31 22:48:12.326043975Z" level=info msg="Stopped pod sandbox: 97f5fa31394ebd44a561ea1d04fac15f02670f82c357560511bc65e0b9ff52dd" id=b93b4d32-ac67-4c4a-a241-2a8ecef96e4f name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 31 22:48:12 addons-926553 crio[969]: time="2024-08-31 22:48:12.365371809Z" level=info msg="Removing container: e9081eefaa13418455ad62e9f887f29f53a426393b91a55ee6ee756ce9668b6e" id=3a4d4f3f-7063-4627-8494-4673b41e558f name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 31 22:48:12 addons-926553 crio[969]: time="2024-08-31 22:48:12.384309565Z" level=info msg="Removed container e9081eefaa13418455ad62e9f887f29f53a426393b91a55ee6ee756ce9668b6e: ingress-nginx/ingress-nginx-controller-bc57996ff-xrqz9/controller" id=3a4d4f3f-7063-4627-8494-4673b41e558f name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2f63a683509df       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        8 seconds ago       Running             hello-world-app           0                   3f204122dcaee       hello-world-app-55bf9c44b4-9xzr8
	760b8772821dc       docker.io/library/nginx@sha256:ba188f579f7a2638229e326e78c957a185630e303757813ef1ad7aac1b8248b6                              2 minutes ago       Running             nginx                     0                   616434f938bd3       nginx
	5102df2042c27       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69                 13 minutes ago      Running             gcp-auth                  0                   c6ce5424649e0       gcp-auth-89d5ffd79-ntcjg
	08c755cbd5fe4       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             13 minutes ago      Running             local-path-provisioner    0                   57dab6b5f6051       local-path-provisioner-86d989889c-5d9bc
	e279b35e3726f       420193b27261a8d37b9fb1faeed45094cefa47e72a7538fd5a6c05e8b5ce192e                                                             13 minutes ago      Exited              patch                     2                   44fcf4b002cf9       ingress-nginx-admission-patch-qsgmg
	bcf5108769347       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3   13 minutes ago      Exited              create                    0                   7aab5d7eec9ef       ingress-nginx-admission-create-pxdjc
	1512f4dc6befd       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f        13 minutes ago      Running             metrics-server            0                   9ffbb41ccd3eb       metrics-server-84c5f94fbc-zwvsl
	d4a4a18a5a7f6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             14 minutes ago      Running             storage-provisioner       0                   37a8c2f557cde       storage-provisioner
	c0854dd1abcf9       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                             14 minutes ago      Running             coredns                   0                   c565a0f2f52b8       coredns-6f6b679f8f-sljbt
	7cc064acda755       docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64                           14 minutes ago      Running             kindnet-cni               0                   ba7fb4cc6f892       kindnet-wdlp4
	38638055bfba9       71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89                                                             14 minutes ago      Running             kube-proxy                0                   2faf839d32f54       kube-proxy-2x2mt
	cc59354075cb7       fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd                                                             15 minutes ago      Running             kube-controller-manager   0                   9d98609f879af       kube-controller-manager-addons-926553
	a2ceaab8a5e1b       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                             15 minutes ago      Running             etcd                      0                   003527351e2b0       etcd-addons-926553
	29388d95df021       fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb                                                             15 minutes ago      Running             kube-scheduler            0                   58f6b662812e6       kube-scheduler-addons-926553
	4f3de6a88ca04       cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388                                                             15 minutes ago      Running             kube-apiserver            0                   fec228035ae32       kube-apiserver-addons-926553
	
	
	==> coredns [c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb] <==
	[INFO] 10.244.0.14:47403 - 18828 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000107133s
	[INFO] 10.244.0.14:60100 - 56608 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.008414849s
	[INFO] 10.244.0.14:60100 - 41517 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.009713783s
	[INFO] 10.244.0.14:38062 - 19984 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000167759s
	[INFO] 10.244.0.14:38062 - 61468 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000152022s
	[INFO] 10.244.0.14:56768 - 49550 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000107535s
	[INFO] 10.244.0.14:56768 - 25522 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000038949s
	[INFO] 10.244.0.14:36032 - 41173 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000087826s
	[INFO] 10.244.0.14:36032 - 21969 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000059166s
	[INFO] 10.244.0.14:57338 - 29619 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000121532s
	[INFO] 10.244.0.14:57338 - 61873 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000038046s
	[INFO] 10.244.0.14:56027 - 58740 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002244404s
	[INFO] 10.244.0.14:56027 - 1643 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002177787s
	[INFO] 10.244.0.14:36047 - 49336 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000069111s
	[INFO] 10.244.0.14:36047 - 12732 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000192186s
	[INFO] 10.244.0.19:60080 - 19976 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000207808s
	[INFO] 10.244.0.19:44795 - 23051 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000118792s
	[INFO] 10.244.0.19:45334 - 37804 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000198151s
	[INFO] 10.244.0.19:49736 - 43423 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000110488s
	[INFO] 10.244.0.19:60561 - 60650 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127867s
	[INFO] 10.244.0.19:55452 - 41864 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000097204s
	[INFO] 10.244.0.19:54221 - 39065 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002445188s
	[INFO] 10.244.0.19:53320 - 41026 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00209399s
	[INFO] 10.244.0.19:57162 - 45093 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001822174s
	[INFO] 10.244.0.19:34360 - 14218 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002709595s
	
	
	==> describe nodes <==
	Name:               addons-926553
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-926553
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=addons-926553
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_31T22_33_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-926553
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:33:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-926553
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 22:48:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 22:45:59 +0000   Sat, 31 Aug 2024 22:33:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 22:45:59 +0000   Sat, 31 Aug 2024 22:33:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 22:45:59 +0000   Sat, 31 Aug 2024 22:33:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 22:45:59 +0000   Sat, 31 Aug 2024 22:34:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-926553
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 a4c4652ff78a412da204ff6653859615
	  System UUID:                a9959b90-2ddc-4599-b12a-adb3653f0cc6
	  Boot ID:                    4693db4c-e615-4c4b-bc39-ae1431ae1ebc
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     hello-world-app-55bf9c44b4-9xzr8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  gcp-auth                    gcp-auth-89d5ffd79-ntcjg                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-6f6b679f8f-sljbt                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-addons-926553                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-wdlp4                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-addons-926553               250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-addons-926553      200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-2x2mt                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-addons-926553               100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-84c5f94fbc-zwvsl            100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         14m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  local-path-storage          local-path-provisioner-86d989889c-5d9bc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 14m                kube-proxy       
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node addons-926553 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node addons-926553 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node addons-926553 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  14m                kubelet          Node addons-926553 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                kubelet          Node addons-926553 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m                kubelet          Node addons-926553 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m                node-controller  Node addons-926553 event: Registered Node addons-926553 in Controller
	  Normal   NodeReady                14m                kubelet          Node addons-926553 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug31 20:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014722] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.471263] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.854339] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.621095] kauditd_printk_skb: 36 callbacks suppressed
	[Aug31 21:33] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Aug31 21:36] hrtimer: interrupt took 85633258 ns
	
	
	==> etcd [a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678] <==
	{"level":"warn","ts":"2024-08-31T22:33:33.524119Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"278.411585ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4109"}
	{"level":"info","ts":"2024-08-31T22:33:33.524263Z","caller":"traceutil/trace.go:171","msg":"trace[1066283787] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:417; }","duration":"278.562189ms","start":"2024-08-31T22:33:33.245688Z","end":"2024-08-31T22:33:33.524250Z","steps":["trace[1066283787] 'agreement among raft nodes before linearized reading'  (duration: 278.328041ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:33:33.524725Z","caller":"traceutil/trace.go:171","msg":"trace[1998513680] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"129.020394ms","start":"2024-08-31T22:33:33.395694Z","end":"2024-08-31T22:33:33.524714Z","steps":["trace[1998513680] 'process raft request'  (duration: 107.387194ms)","trace[1998513680] 'compare'  (duration: 20.622685ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-31T22:33:33.530997Z","caller":"traceutil/trace.go:171","msg":"trace[234719321] transaction","detail":"{read_only:false; response_revision:413; number_of_response:1; }","duration":"135.133009ms","start":"2024-08-31T22:33:33.395850Z","end":"2024-08-31T22:33:33.530983Z","steps":["trace[234719321] 'process raft request'  (duration: 127.947447ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T22:33:33.531851Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.575244ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-31T22:33:33.566193Z","caller":"traceutil/trace.go:171","msg":"trace[595826488] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:417; }","duration":"306.912441ms","start":"2024-08-31T22:33:33.259250Z","end":"2024-08-31T22:33:33.566162Z","steps":["trace[595826488] 'agreement among raft nodes before linearized reading'  (duration: 272.568401ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T22:33:33.573983Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-31T22:33:33.259231Z","time spent":"314.696303ms","remote":"127.0.0.1:50728","response type":"/etcdserverpb.KV/Range","request count":0,"request size":25,"response count":0,"response size":29,"request content":"key:\"/registry/limitranges\" limit:1 "}
	{"level":"warn","ts":"2024-08-31T22:33:33.531910Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.516901ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:1 size:3145"}
	{"level":"info","ts":"2024-08-31T22:33:33.577122Z","caller":"traceutil/trace.go:171","msg":"trace[1467882144] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:1; response_revision:417; }","duration":"164.71578ms","start":"2024-08-31T22:33:33.412390Z","end":"2024-08-31T22:33:33.577105Z","steps":["trace[1467882144] 'agreement among raft nodes before linearized reading'  (duration: 119.483424ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T22:33:33.531929Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.582255ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/metrics-server\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-08-31T22:33:33.531951Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.333652ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-08-31T22:33:33.531974Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"271.597156ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-08-31T22:33:33.531992Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.932034ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-08-31T22:33:33.532029Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.89155ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry\" ","response":"range_response_count:1 size:3351"}
	{"level":"info","ts":"2024-08-31T22:33:33.577617Z","caller":"traceutil/trace.go:171","msg":"trace[133143656] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:0; response_revision:417; }","duration":"165.263282ms","start":"2024-08-31T22:33:33.412344Z","end":"2024-08-31T22:33:33.577607Z","steps":["trace[133143656] 'agreement among raft nodes before linearized reading'  (duration: 119.576076ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:33:33.577692Z","caller":"traceutil/trace.go:171","msg":"trace[701626801] range","detail":"{range_begin:/registry/clusterrolebindings/storage-provisioner; range_end:; response_count:0; response_revision:417; }","duration":"182.073297ms","start":"2024-08-31T22:33:33.395612Z","end":"2024-08-31T22:33:33.577685Z","steps":["trace[701626801] 'agreement among raft nodes before linearized reading'  (duration: 136.325628ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:33:33.577710Z","caller":"traceutil/trace.go:171","msg":"trace[617058299] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:417; }","duration":"317.35042ms","start":"2024-08-31T22:33:33.260355Z","end":"2024-08-31T22:33:33.577705Z","steps":["trace[617058299] 'agreement among raft nodes before linearized reading'  (duration: 271.603752ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T22:33:33.609326Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-31T22:33:33.260279Z","time spent":"349.011862ms","remote":"127.0.0.1:50662","response type":"/etcdserverpb.KV/Range","request count":0,"request size":28,"response count":0,"response size":29,"request content":"key:\"/registry/resourcequotas\" limit:1 "}
	{"level":"info","ts":"2024-08-31T22:33:33.577966Z","caller":"traceutil/trace.go:171","msg":"trace[1867680583] range","detail":"{range_begin:/registry/clusterrolebindings/minikube-ingress-dns; range_end:; response_count:0; response_revision:417; }","duration":"318.901298ms","start":"2024-08-31T22:33:33.259056Z","end":"2024-08-31T22:33:33.577957Z","steps":["trace[1867680583] 'agreement among raft nodes before linearized reading'  (duration: 272.92616ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T22:33:33.610229Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-31T22:33:33.259019Z","time spent":"351.194926ms","remote":"127.0.0.1:50942","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":0,"response size":29,"request content":"key:\"/registry/clusterrolebindings/minikube-ingress-dns\" "}
	{"level":"info","ts":"2024-08-31T22:33:33.577987Z","caller":"traceutil/trace.go:171","msg":"trace[1222259269] range","detail":"{range_begin:/registry/deployments/kube-system/registry; range_end:; response_count:1; response_revision:417; }","duration":"318.848532ms","start":"2024-08-31T22:33:33.259134Z","end":"2024-08-31T22:33:33.577983Z","steps":["trace[1222259269] 'agreement among raft nodes before linearized reading'  (duration: 272.866747ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T22:33:33.614870Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-31T22:33:33.259122Z","time spent":"355.723597ms","remote":"127.0.0.1:51030","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":1,"response size":3375,"request content":"key:\"/registry/deployments/kube-system/registry\" "}
	{"level":"info","ts":"2024-08-31T22:43:18.076737Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1540}
	{"level":"info","ts":"2024-08-31T22:43:18.119550Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1540,"took":"42.327277ms","hash":1500695898,"current-db-size-bytes":6250496,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":3358720,"current-db-size-in-use":"3.4 MB"}
	{"level":"info","ts":"2024-08-31T22:43:18.119615Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1500695898,"revision":1540,"compact-revision":-1}
	
	
	==> gcp-auth [5102df2042c274c3bdda768e34fef45be4cf3338060a3b3ca18b308ef802a5b7] <==
	2024/08/31 22:36:01 Ready to write response ...
	2024/08/31 22:36:01 Ready to marshal response ...
	2024/08/31 22:36:01 Ready to write response ...
	2024/08/31 22:44:06 Ready to marshal response ...
	2024/08/31 22:44:06 Ready to write response ...
	2024/08/31 22:44:14 Ready to marshal response ...
	2024/08/31 22:44:14 Ready to write response ...
	2024/08/31 22:44:27 Ready to marshal response ...
	2024/08/31 22:44:27 Ready to write response ...
	2024/08/31 22:45:02 Ready to marshal response ...
	2024/08/31 22:45:02 Ready to write response ...
	2024/08/31 22:45:02 Ready to marshal response ...
	2024/08/31 22:45:02 Ready to write response ...
	2024/08/31 22:45:10 Ready to marshal response ...
	2024/08/31 22:45:10 Ready to write response ...
	2024/08/31 22:45:18 Ready to marshal response ...
	2024/08/31 22:45:18 Ready to write response ...
	2024/08/31 22:45:18 Ready to marshal response ...
	2024/08/31 22:45:18 Ready to write response ...
	2024/08/31 22:45:18 Ready to marshal response ...
	2024/08/31 22:45:18 Ready to write response ...
	2024/08/31 22:45:48 Ready to marshal response ...
	2024/08/31 22:45:48 Ready to write response ...
	2024/08/31 22:48:07 Ready to marshal response ...
	2024/08/31 22:48:07 Ready to write response ...
	
	
	==> kernel <==
	 22:48:17 up  2:30,  0 users,  load average: 0.35, 0.50, 1.29
	Linux addons-926553 5.15.0-1068-aws #74~20.04.1-Ubuntu SMP Tue Aug 6 19:45:17 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171] <==
	I0831 22:46:14.649610       1 main.go:299] handling current node
	I0831 22:46:24.649582       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:46:24.649731       1 main.go:299] handling current node
	I0831 22:46:34.649782       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:46:34.649816       1 main.go:299] handling current node
	I0831 22:46:44.653812       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:46:44.653845       1 main.go:299] handling current node
	I0831 22:46:54.652124       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:46:54.652166       1 main.go:299] handling current node
	I0831 22:47:04.649785       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:47:04.649908       1 main.go:299] handling current node
	I0831 22:47:14.657398       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:47:14.657439       1 main.go:299] handling current node
	I0831 22:47:24.650504       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:47:24.650649       1 main.go:299] handling current node
	I0831 22:47:34.650536       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:47:34.650570       1 main.go:299] handling current node
	I0831 22:47:44.652483       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:47:44.652517       1 main.go:299] handling current node
	I0831 22:47:54.651570       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:47:54.651699       1 main.go:299] handling current node
	I0831 22:48:04.656494       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:48:04.656525       1 main.go:299] handling current node
	I0831 22:48:14.649556       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:48:14.649587       1 main.go:299] handling current node
	
	
	==> kube-apiserver [4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7] <==
	E0831 22:35:25.357143       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0831 22:35:25.405475       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0831 22:44:19.100699       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0831 22:44:43.651018       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:44:43.651160       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0831 22:44:43.684644       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:44:43.684781       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0831 22:44:43.702517       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:44:43.702581       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0831 22:44:43.707833       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:44:43.707886       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0831 22:44:43.743226       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:44:43.743276       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0831 22:44:44.708480       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0831 22:44:44.744290       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0831 22:44:44.836280       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0831 22:45:18.780004       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.178.150"}
	I0831 22:45:42.410885       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0831 22:45:43.451889       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0831 22:45:47.980738       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0831 22:45:48.341429       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.56.133"}
	I0831 22:48:07.301950       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.239.134"}
	
	
	==> kube-controller-manager [cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0] <==
	W0831 22:46:53.090932       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:46:53.090974       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:47:21.754196       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:47:21.754243       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:47:24.418632       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:47:24.418756       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:47:27.344246       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:47:27.344288       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:47:38.767771       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:47:38.767814       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:47:55.389469       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:47:55.389516       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:48:02.971611       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:48:02.971659       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0831 22:48:07.066200       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="55.62313ms"
	I0831 22:48:07.077718       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="11.385393ms"
	I0831 22:48:07.077819       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="60.939µs"
	I0831 22:48:07.087907       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="128.606µs"
	I0831 22:48:09.126460       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0831 22:48:09.133874       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="8.903µs"
	I0831 22:48:09.136462       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0831 22:48:09.381511       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="9.94501ms"
	I0831 22:48:09.381686       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="39.261µs"
	W0831 22:48:11.811756       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:48:11.811800       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87] <==
	I0831 22:33:33.909772       1 server_linux.go:66] "Using iptables proxy"
	I0831 22:33:34.876166       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0831 22:33:34.876653       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0831 22:33:35.043499       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0831 22:33:35.050030       1 server_linux.go:169] "Using iptables Proxier"
	I0831 22:33:35.104068       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0831 22:33:35.104588       1 server.go:483] "Version info" version="v1.31.0"
	I0831 22:33:35.104890       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 22:33:35.106274       1 config.go:197] "Starting service config controller"
	I0831 22:33:35.106395       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0831 22:33:35.106464       1 config.go:104] "Starting endpoint slice config controller"
	I0831 22:33:35.106494       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0831 22:33:35.107280       1 config.go:326] "Starting node config controller"
	I0831 22:33:35.107354       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0831 22:33:35.222348       1 shared_informer.go:320] Caches are synced for node config
	I0831 22:33:35.222470       1 shared_informer.go:320] Caches are synced for service config
	I0831 22:33:35.222534       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b] <==
	W0831 22:33:20.578962       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0831 22:33:20.578977       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:20.579019       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0831 22:33:20.579037       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:20.579079       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0831 22:33:20.579097       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:20.579189       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0831 22:33:20.579211       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:20.579279       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0831 22:33:20.579296       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:20.579337       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0831 22:33:20.579353       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:20.584824       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0831 22:33:20.584869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:21.398071       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0831 22:33:21.398208       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:21.413716       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0831 22:33:21.413827       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:21.497136       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0831 22:33:21.497258       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:21.589583       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0831 22:33:21.589719       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:21.860482       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0831 22:33:21.860528       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0831 22:33:24.865187       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 31 22:48:07 addons-926553 kubelet[1497]: I0831 22:48:07.121144    1497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/fa2d027b-f4a3-4488-af28-fabae2a77f6e-gcp-creds\") pod \"hello-world-app-55bf9c44b4-9xzr8\" (UID: \"fa2d027b-f4a3-4488-af28-fabae2a77f6e\") " pod="default/hello-world-app-55bf9c44b4-9xzr8"
	Aug 31 22:48:08 addons-926553 kubelet[1497]: I0831 22:48:08.328023    1497 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjpnx\" (UniqueName: \"kubernetes.io/projected/0e07561b-af16-4df3-8e88-438e733a8930-kube-api-access-jjpnx\") pod \"0e07561b-af16-4df3-8e88-438e733a8930\" (UID: \"0e07561b-af16-4df3-8e88-438e733a8930\") "
	Aug 31 22:48:08 addons-926553 kubelet[1497]: I0831 22:48:08.332594    1497 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e07561b-af16-4df3-8e88-438e733a8930-kube-api-access-jjpnx" (OuterVolumeSpecName: "kube-api-access-jjpnx") pod "0e07561b-af16-4df3-8e88-438e733a8930" (UID: "0e07561b-af16-4df3-8e88-438e733a8930"). InnerVolumeSpecName "kube-api-access-jjpnx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 31 22:48:08 addons-926553 kubelet[1497]: I0831 22:48:08.345345    1497 scope.go:117] "RemoveContainer" containerID="eb94fa29e1d5ab369c367099c1b48d8008010237da003f81174f145e75c62d4d"
	Aug 31 22:48:08 addons-926553 kubelet[1497]: I0831 22:48:08.371110    1497 scope.go:117] "RemoveContainer" containerID="eb94fa29e1d5ab369c367099c1b48d8008010237da003f81174f145e75c62d4d"
	Aug 31 22:48:08 addons-926553 kubelet[1497]: E0831 22:48:08.371508    1497 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb94fa29e1d5ab369c367099c1b48d8008010237da003f81174f145e75c62d4d\": container with ID starting with eb94fa29e1d5ab369c367099c1b48d8008010237da003f81174f145e75c62d4d not found: ID does not exist" containerID="eb94fa29e1d5ab369c367099c1b48d8008010237da003f81174f145e75c62d4d"
	Aug 31 22:48:08 addons-926553 kubelet[1497]: I0831 22:48:08.371550    1497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb94fa29e1d5ab369c367099c1b48d8008010237da003f81174f145e75c62d4d"} err="failed to get container status \"eb94fa29e1d5ab369c367099c1b48d8008010237da003f81174f145e75c62d4d\": rpc error: code = NotFound desc = could not find container \"eb94fa29e1d5ab369c367099c1b48d8008010237da003f81174f145e75c62d4d\": container with ID starting with eb94fa29e1d5ab369c367099c1b48d8008010237da003f81174f145e75c62d4d not found: ID does not exist"
	Aug 31 22:48:08 addons-926553 kubelet[1497]: I0831 22:48:08.431992    1497 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jjpnx\" (UniqueName: \"kubernetes.io/projected/0e07561b-af16-4df3-8e88-438e733a8930-kube-api-access-jjpnx\") on node \"addons-926553\" DevicePath \"\""
	Aug 31 22:48:09 addons-926553 kubelet[1497]: I0831 22:48:09.051967    1497 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e07561b-af16-4df3-8e88-438e733a8930" path="/var/lib/kubelet/pods/0e07561b-af16-4df3-8e88-438e733a8930/volumes"
	Aug 31 22:48:11 addons-926553 kubelet[1497]: E0831 22:48:11.051289    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="5722af42-82b3-4bf5-a07f-92ee5dd87a84"
	Aug 31 22:48:11 addons-926553 kubelet[1497]: I0831 22:48:11.052058    1497 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="186cd98d-03a5-48f5-b33a-08bde545bc25" path="/var/lib/kubelet/pods/186cd98d-03a5-48f5-b33a-08bde545bc25/volumes"
	Aug 31 22:48:11 addons-926553 kubelet[1497]: I0831 22:48:11.052521    1497 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef7554b8-acff-44ff-bc5a-8a5ff53da325" path="/var/lib/kubelet/pods/ef7554b8-acff-44ff-bc5a-8a5ff53da325/volumes"
	Aug 31 22:48:12 addons-926553 kubelet[1497]: I0831 22:48:12.360106    1497 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h6jf\" (UniqueName: \"kubernetes.io/projected/30ac112c-2cb9-44df-8b86-e9a9804b4efa-kube-api-access-8h6jf\") pod \"30ac112c-2cb9-44df-8b86-e9a9804b4efa\" (UID: \"30ac112c-2cb9-44df-8b86-e9a9804b4efa\") "
	Aug 31 22:48:12 addons-926553 kubelet[1497]: I0831 22:48:12.360169    1497 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/30ac112c-2cb9-44df-8b86-e9a9804b4efa-webhook-cert\") pod \"30ac112c-2cb9-44df-8b86-e9a9804b4efa\" (UID: \"30ac112c-2cb9-44df-8b86-e9a9804b4efa\") "
	Aug 31 22:48:12 addons-926553 kubelet[1497]: I0831 22:48:12.363859    1497 scope.go:117] "RemoveContainer" containerID="e9081eefaa13418455ad62e9f887f29f53a426393b91a55ee6ee756ce9668b6e"
	Aug 31 22:48:12 addons-926553 kubelet[1497]: I0831 22:48:12.366220    1497 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30ac112c-2cb9-44df-8b86-e9a9804b4efa-kube-api-access-8h6jf" (OuterVolumeSpecName: "kube-api-access-8h6jf") pod "30ac112c-2cb9-44df-8b86-e9a9804b4efa" (UID: "30ac112c-2cb9-44df-8b86-e9a9804b4efa"). InnerVolumeSpecName "kube-api-access-8h6jf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 31 22:48:12 addons-926553 kubelet[1497]: I0831 22:48:12.370425    1497 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30ac112c-2cb9-44df-8b86-e9a9804b4efa-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "30ac112c-2cb9-44df-8b86-e9a9804b4efa" (UID: "30ac112c-2cb9-44df-8b86-e9a9804b4efa"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 31 22:48:12 addons-926553 kubelet[1497]: I0831 22:48:12.384721    1497 scope.go:117] "RemoveContainer" containerID="e9081eefaa13418455ad62e9f887f29f53a426393b91a55ee6ee756ce9668b6e"
	Aug 31 22:48:12 addons-926553 kubelet[1497]: E0831 22:48:12.385103    1497 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9081eefaa13418455ad62e9f887f29f53a426393b91a55ee6ee756ce9668b6e\": container with ID starting with e9081eefaa13418455ad62e9f887f29f53a426393b91a55ee6ee756ce9668b6e not found: ID does not exist" containerID="e9081eefaa13418455ad62e9f887f29f53a426393b91a55ee6ee756ce9668b6e"
	Aug 31 22:48:12 addons-926553 kubelet[1497]: I0831 22:48:12.385144    1497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9081eefaa13418455ad62e9f887f29f53a426393b91a55ee6ee756ce9668b6e"} err="failed to get container status \"e9081eefaa13418455ad62e9f887f29f53a426393b91a55ee6ee756ce9668b6e\": rpc error: code = NotFound desc = could not find container \"e9081eefaa13418455ad62e9f887f29f53a426393b91a55ee6ee756ce9668b6e\": container with ID starting with e9081eefaa13418455ad62e9f887f29f53a426393b91a55ee6ee756ce9668b6e not found: ID does not exist"
	Aug 31 22:48:12 addons-926553 kubelet[1497]: I0831 22:48:12.461075    1497 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/30ac112c-2cb9-44df-8b86-e9a9804b4efa-webhook-cert\") on node \"addons-926553\" DevicePath \"\""
	Aug 31 22:48:12 addons-926553 kubelet[1497]: I0831 22:48:12.461119    1497 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8h6jf\" (UniqueName: \"kubernetes.io/projected/30ac112c-2cb9-44df-8b86-e9a9804b4efa-kube-api-access-8h6jf\") on node \"addons-926553\" DevicePath \"\""
	Aug 31 22:48:13 addons-926553 kubelet[1497]: I0831 22:48:13.052303    1497 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30ac112c-2cb9-44df-8b86-e9a9804b4efa" path="/var/lib/kubelet/pods/30ac112c-2cb9-44df-8b86-e9a9804b4efa/volumes"
	Aug 31 22:48:13 addons-926553 kubelet[1497]: E0831 22:48:13.336136    1497 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144493335813747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582441,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:48:13 addons-926553 kubelet[1497]: E0831 22:48:13.336173    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144493335813747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582441,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [d4a4a18a5a7f6d6b98241bc922d29ac28c4b9779e5a615453b66ea70509523e8] <==
	I0831 22:34:15.733314       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0831 22:34:15.907321       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0831 22:34:15.907562       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0831 22:34:16.042095       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0831 22:34:16.048020       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-926553_0312c23a-1da9-4f4f-853d-3c7ebaecd6a0!
	I0831 22:34:16.060065       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d1090045-d7c1-4b36-83f3-943893f1aa8d", APIVersion:"v1", ResourceVersion:"934", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-926553_0312c23a-1da9-4f4f-853d-3c7ebaecd6a0 became leader
	I0831 22:34:16.149026       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-926553_0312c23a-1da9-4f4f-853d-3c7ebaecd6a0!
	

                                                
                                                
-- /stdout --
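
An aside on the coredns entries near the top of the log above: the registry.kube-system.svc.cluster.local lookups show ordinary resolv.conf search-path expansion. The name has only four dots, below the ndots:5 threshold in the kubelet-generated resolv.conf, so each search suffix (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal) is appended and tried first, producing the NXDOMAIN pairs, before the bare name answers NOERROR. A minimal sketch of that candidate ordering, not part of the test suite and assuming the default resolv.conf of a kube-system pod:

    package main

    import (
        "fmt"
        "strings"
    )

    // candidates lists the names a resolver with the given search list and
    // ndots threshold tries, in order, for a relative name.
    func candidates(name string, search []string, ndots int) []string {
        var out []string
        if strings.Count(name, ".") < ndots {
            // Below the ndots threshold: search suffixes first, bare name last.
            for _, s := range search {
                out = append(out, name+"."+s)
            }
            return append(out, name)
        }
        // At or above the threshold: bare name first, then the suffixes.
        out = append(out, name)
        for _, s := range search {
            out = append(out, name+"."+s)
        }
        return out
    }

    func main() {
        search := []string{"kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local", "us-east-2.compute.internal"}
        for _, c := range candidates("registry.kube-system.svc.cluster.local", search, 5) {
            fmt.Println(c)
        }
    }

Running it prints the same sequence of names that appears as NXDOMAIN/NOERROR queries in the coredns log.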
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-926553 -n addons-926553
helpers_test.go:262: (dbg) Run:  kubectl --context addons-926553 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:273: non-running pods: busybox
helpers_test.go:275: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:278: (dbg) Run:  kubectl --context addons-926553 describe pod busybox
helpers_test.go:283: (dbg) kubectl --context addons-926553 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-926553/192.168.49.2
	Start Time:       Sat, 31 Aug 2024 22:36:01 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-npklh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-npklh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  12m                  default-scheduler  Successfully assigned default/busybox to addons-926553
	  Normal   Pulling    10m (x4 over 12m)    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     10m (x4 over 12m)    kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     10m (x4 over 12m)    kubelet            Error: ErrImagePull
	  Warning  Failed     10m (x6 over 12m)    kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2m6s (x41 over 12m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:286: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:287: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (151.15s)
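
The busybox pod in the describe output above never starts: every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc fails with "unable to retrieve auth token: invalid username/password: unauthorized". One way to separate a registry-side problem from the credentials injected into the pod is to resolve the image manifest from outside the cluster. A hypothetical standalone check using go-containerregistry, not part of this suite:

    package main

    import (
        "fmt"

        "github.com/google/go-containerregistry/pkg/crane"
    )

    func main() {
        // Resolve the manifest digest for the tag the pod fails to pull.
        // crane uses the local Docker keychain and falls back to anonymous
        // access when no credentials match gcr.io, so a success here points
        // at the auth token handed to the kubelet rather than the registry.
        ref := "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
        digest, err := crane.Digest(ref)
        if err != nil {
            fmt.Println("manifest lookup failed:", err)
            return
        }
        fmt.Println(ref, "->", digest)
    }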

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (306.8s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 10.83822ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:345: "metrics-server-84c5f94fbc-zwvsl" [8f41beaa-bd4a-4b1b-957e-d5fcd9f7aba9] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003450565s
addons_test.go:417: (dbg) Run:  kubectl --context addons-926553 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-926553 top pods -n kube-system: exit status 1 (96.765677ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-sljbt, age: 11m57.435934377s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-926553 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-926553 top pods -n kube-system: exit status 1 (92.094266ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-sljbt, age: 12m0.979597102s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-926553 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-926553 top pods -n kube-system: exit status 1 (85.010143ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-sljbt, age: 12m6.141227841s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-926553 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-926553 top pods -n kube-system: exit status 1 (102.267385ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-sljbt, age: 12m14.606484028s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-926553 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-926553 top pods -n kube-system: exit status 1 (97.476095ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-sljbt, age: 12m25.890285751s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-926553 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-926553 top pods -n kube-system: exit status 1 (99.553118ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-sljbt, age: 12m42.954682095s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-926553 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-926553 top pods -n kube-system: exit status 1 (86.62241ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-sljbt, age: 13m6.154404546s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-926553 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-926553 top pods -n kube-system: exit status 1 (92.182075ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-sljbt, age: 13m49.540643306s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-926553 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-926553 top pods -n kube-system: exit status 1 (98.622171ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-sljbt, age: 14m16.550969249s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-926553 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-926553 top pods -n kube-system: exit status 1 (85.933033ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-sljbt, age: 15m16.887091658s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-926553 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-926553 top pods -n kube-system: exit status 1 (83.213811ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-sljbt, age: 15m50.370664402s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-926553 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-926553 top pods -n kube-system: exit status 1 (87.100638ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-sljbt, age: 16m23.268940291s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-926553 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-926553 top pods -n kube-system: exit status 1 (86.927307ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-sljbt, age: 16m54.742988161s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
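Note: every "kubectl top pods" attempt above returned "Metrics not available", so the metrics API never started serving within the test's retry window. Some follow-up checks one could run by hand, assuming the cluster is still up; they are illustrative and not part of the recorded test run:

	kubectl --context addons-926553 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-926553 -n kube-system logs deploy/metrics-server
	kubectl --context addons-926553 top nodes

An APIService reporting Available=False, or metrics-server logs showing scrape errors against the kubelet, would narrow down why the metrics never appeared.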
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-926553 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect addons-926553
helpers_test.go:236: (dbg) docker inspect addons-926553:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2b41c4e07f7abc3eee3ba56e7ee5b2b22c7ae1259d49e9bd6c1a695e687c691a",
	        "Created": "2024-08-31T22:32:58.142499264Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 284459,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-31T22:32:58.286853851Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:eb620c1d7126103417d4dc31eb6aaaf95b0878713d0303a36cb77002c31b0deb",
	        "ResolvConfPath": "/var/lib/docker/containers/2b41c4e07f7abc3eee3ba56e7ee5b2b22c7ae1259d49e9bd6c1a695e687c691a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2b41c4e07f7abc3eee3ba56e7ee5b2b22c7ae1259d49e9bd6c1a695e687c691a/hostname",
	        "HostsPath": "/var/lib/docker/containers/2b41c4e07f7abc3eee3ba56e7ee5b2b22c7ae1259d49e9bd6c1a695e687c691a/hosts",
	        "LogPath": "/var/lib/docker/containers/2b41c4e07f7abc3eee3ba56e7ee5b2b22c7ae1259d49e9bd6c1a695e687c691a/2b41c4e07f7abc3eee3ba56e7ee5b2b22c7ae1259d49e9bd6c1a695e687c691a-json.log",
	        "Name": "/addons-926553",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-926553:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-926553",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3ee03926f17b7c804764f694abd55e3fb29259d457363383da7117854abec424-init/diff:/var/lib/docker/overlay2/b65bd3df822a42b081e949f262147909a06a528615f1ebee5ca341285d3e7159/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3ee03926f17b7c804764f694abd55e3fb29259d457363383da7117854abec424/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3ee03926f17b7c804764f694abd55e3fb29259d457363383da7117854abec424/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3ee03926f17b7c804764f694abd55e3fb29259d457363383da7117854abec424/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-926553",
	                "Source": "/var/lib/docker/volumes/addons-926553/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-926553",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-926553",
	                "name.minikube.sigs.k8s.io": "addons-926553",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "299f7cd903653354b274e148f6cb6a39ed6942891df3e3272bc94377e3fd800f",
	            "SandboxKey": "/var/run/docker/netns/299f7cd90365",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-926553": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "7a8828e69332b37e7bad00ea7f7da101018d986bdcdd9608e22ba654914df386",
	                    "EndpointID": "f81499bc432f0db4a48aaa2f7a33d2bce9def00a9f596d90ba418160f18b3dd7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-926553",
	                        "2b41c4e07f7a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
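Note: the docker inspect dump above is mostly useful for the State, Mounts, and NetworkSettings.Ports sections (for example, the API server port 8443/tcp is published on 127.0.0.1:33136). A single field can be pulled out with a Go-template query of the same shape minikube itself uses later in these logs; this command is illustrative, not part of the test run:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' addons-926553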
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-926553 -n addons-926553
helpers_test.go:245: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p addons-926553 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p addons-926553 logs -n 25: (1.577102897s)
helpers_test.go:253: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-030884                                                                     | download-only-030884   | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	| start   | --download-only -p                                                                          | download-docker-718632 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC |                     |
	|         | download-docker-718632                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-718632                                                                   | download-docker-718632 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-123480   | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC |                     |
	|         | binary-mirror-123480                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:44745                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-123480                                                                     | binary-mirror-123480   | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	| addons  | enable dashboard -p                                                                         | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC |                     |
	|         | addons-926553                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC |                     |
	|         | addons-926553                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-926553 --wait=true                                                                | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:36 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-926553 addons                                                                        | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:44 UTC | 31 Aug 24 22:44 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-926553 addons                                                                        | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:44 UTC | 31 Aug 24 22:44 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-926553 addons disable                                                                | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:44 UTC | 31 Aug 24 22:44 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:45 UTC | 31 Aug 24 22:45 UTC |
	|         | -p addons-926553                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-926553 ssh cat                                                                       | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:45 UTC | 31 Aug 24 22:45 UTC |
	|         | /opt/local-path-provisioner/pvc-329ee4ba-4ee8-45f1-ba46-e92218961da0_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-926553 addons disable                                                                | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:45 UTC | 31 Aug 24 22:45 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-926553 ip                                                                            | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:45 UTC | 31 Aug 24 22:45 UTC |
	| addons  | addons-926553 addons disable                                                                | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:45 UTC | 31 Aug 24 22:45 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:45 UTC | 31 Aug 24 22:45 UTC |
	|         | addons-926553                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:45 UTC | 31 Aug 24 22:45 UTC |
	|         | -p addons-926553                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-926553 addons disable                                                                | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:45 UTC | 31 Aug 24 22:45 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:45 UTC | 31 Aug 24 22:45 UTC |
	|         | addons-926553                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-926553 ssh curl -s                                                                   | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:45 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-926553 ip                                                                            | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:48 UTC | 31 Aug 24 22:48 UTC |
	| addons  | addons-926553 addons disable                                                                | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:48 UTC | 31 Aug 24 22:48 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-926553 addons disable                                                                | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:48 UTC | 31 Aug 24 22:48 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-926553 addons                                                                        | addons-926553          | jenkins | v1.33.1 | 31 Aug 24 22:50 UTC | 31 Aug 24 22:50 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 22:32:33
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 22:32:33.055573  283957 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:32:33.055738  283957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:32:33.055749  283957 out.go:358] Setting ErrFile to fd 2...
	I0831 22:32:33.055754  283957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:32:33.056034  283957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-277799/.minikube/bin
	I0831 22:32:33.056594  283957 out.go:352] Setting JSON to false
	I0831 22:32:33.057655  283957 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":8101,"bootTime":1725135452,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0831 22:32:33.057748  283957 start.go:139] virtualization:  
	I0831 22:32:33.061311  283957 out.go:177] * [addons-926553] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0831 22:32:33.065254  283957 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 22:32:33.065416  283957 notify.go:220] Checking for updates...
	I0831 22:32:33.070822  283957 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:32:33.074065  283957 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-277799/kubeconfig
	I0831 22:32:33.076774  283957 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-277799/.minikube
	I0831 22:32:33.079454  283957 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0831 22:32:33.082232  283957 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 22:32:33.085445  283957 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:32:33.116782  283957 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0831 22:32:33.116914  283957 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:32:33.173707  283957 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-31 22:32:33.16402705 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:32:33.173832  283957 docker.go:307] overlay module found
	I0831 22:32:33.176642  283957 out.go:177] * Using the docker driver based on user configuration
	I0831 22:32:33.179170  283957 start.go:297] selected driver: docker
	I0831 22:32:33.179214  283957 start.go:901] validating driver "docker" against <nil>
	I0831 22:32:33.179232  283957 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 22:32:33.179877  283957 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:32:33.244492  283957 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-31 22:32:33.235116551 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:32:33.244664  283957 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 22:32:33.244891  283957 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 22:32:33.247588  283957 out.go:177] * Using Docker driver with root privileges
	I0831 22:32:33.250073  283957 cni.go:84] Creating CNI manager for ""
	I0831 22:32:33.250100  283957 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0831 22:32:33.250112  283957 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0831 22:32:33.250206  283957 start.go:340] cluster config:
	{Name:addons-926553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-926553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:32:33.253061  283957 out.go:177] * Starting "addons-926553" primary control-plane node in "addons-926553" cluster
	I0831 22:32:33.255456  283957 cache.go:121] Beginning downloading kic base image for docker with crio
	I0831 22:32:33.258049  283957 out.go:177] * Pulling base image v0.0.44-1724862063-19530 ...
	I0831 22:32:33.260597  283957 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:32:33.260655  283957 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18943-277799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	I0831 22:32:33.260667  283957 cache.go:56] Caching tarball of preloaded images
	I0831 22:32:33.260691  283957 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local docker daemon
	I0831 22:32:33.260749  283957 preload.go:172] Found /home/jenkins/minikube-integration/18943-277799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0831 22:32:33.260760  283957 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0831 22:32:33.261148  283957 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/config.json ...
	I0831 22:32:33.261182  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/config.json: {Name:mkdfcbbb034ebf13d0c934d3b8bb6283f2353c8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:32:33.276646  283957 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 to local cache
	I0831 22:32:33.276792  283957 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory
	I0831 22:32:33.276818  283957 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory, skipping pull
	I0831 22:32:33.276823  283957 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 exists in cache, skipping pull
	I0831 22:32:33.276832  283957 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 as a tarball
	I0831 22:32:33.276842  283957 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from local cache
	I0831 22:32:50.926792  283957 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from cached tarball
	I0831 22:32:50.926833  283957 cache.go:194] Successfully downloaded all kic artifacts
	I0831 22:32:50.926891  283957 start.go:360] acquireMachinesLock for addons-926553: {Name:mk45b5d2bdf6c02f40299229aa5af77faafa98b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 22:32:50.927022  283957 start.go:364] duration metric: took 106.732µs to acquireMachinesLock for "addons-926553"
	I0831 22:32:50.927053  283957 start.go:93] Provisioning new machine with config: &{Name:addons-926553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-926553 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 22:32:50.927149  283957 start.go:125] createHost starting for "" (driver="docker")
	I0831 22:32:50.929291  283957 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0831 22:32:50.929542  283957 start.go:159] libmachine.API.Create for "addons-926553" (driver="docker")
	I0831 22:32:50.929577  283957 client.go:168] LocalClient.Create starting
	I0831 22:32:50.929688  283957 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem
	I0831 22:32:51.568232  283957 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem
	I0831 22:32:51.959805  283957 cli_runner.go:164] Run: docker network inspect addons-926553 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0831 22:32:51.976476  283957 cli_runner.go:211] docker network inspect addons-926553 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0831 22:32:51.976564  283957 network_create.go:284] running [docker network inspect addons-926553] to gather additional debugging logs...
	I0831 22:32:51.976587  283957 cli_runner.go:164] Run: docker network inspect addons-926553
	W0831 22:32:51.998246  283957 cli_runner.go:211] docker network inspect addons-926553 returned with exit code 1
	I0831 22:32:51.998286  283957 network_create.go:287] error running [docker network inspect addons-926553]: docker network inspect addons-926553: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-926553 not found
	I0831 22:32:51.998301  283957 network_create.go:289] output of [docker network inspect addons-926553]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-926553 not found
	
	** /stderr **
	I0831 22:32:51.998418  283957 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0831 22:32:52.020066  283957 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017aa870}
	I0831 22:32:52.020113  283957 network_create.go:124] attempt to create docker network addons-926553 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0831 22:32:52.020180  283957 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-926553 addons-926553
	I0831 22:32:52.103358  283957 network_create.go:108] docker network addons-926553 192.168.49.0/24 created
	I0831 22:32:52.103398  283957 kic.go:121] calculated static IP "192.168.49.2" for the "addons-926553" container
	I0831 22:32:52.103481  283957 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0831 22:32:52.117925  283957 cli_runner.go:164] Run: docker volume create addons-926553 --label name.minikube.sigs.k8s.io=addons-926553 --label created_by.minikube.sigs.k8s.io=true
	I0831 22:32:52.134920  283957 oci.go:103] Successfully created a docker volume addons-926553
	I0831 22:32:52.135011  283957 cli_runner.go:164] Run: docker run --rm --name addons-926553-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-926553 --entrypoint /usr/bin/test -v addons-926553:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -d /var/lib
	I0831 22:32:53.917914  283957 cli_runner.go:217] Completed: docker run --rm --name addons-926553-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-926553 --entrypoint /usr/bin/test -v addons-926553:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -d /var/lib: (1.78286744s)
	I0831 22:32:53.917946  283957 oci.go:107] Successfully prepared a docker volume addons-926553
	I0831 22:32:53.917968  283957 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:32:53.917988  283957 kic.go:194] Starting extracting preloaded images to volume ...
	I0831 22:32:53.918085  283957 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-277799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-926553:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0831 22:32:58.069694  283957 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-277799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-926553:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.151551571s)
	I0831 22:32:58.069731  283957 kic.go:203] duration metric: took 4.15173909s to extract preloaded images to volume ...
	W0831 22:32:58.069874  283957 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0831 22:32:58.069992  283957 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0831 22:32:58.127293  283957 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-926553 --name addons-926553 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-926553 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-926553 --network addons-926553 --ip 192.168.49.2 --volume addons-926553:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0
	I0831 22:32:58.451756  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Running}}
	I0831 22:32:58.471081  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:32:58.493141  283957 cli_runner.go:164] Run: docker exec addons-926553 stat /var/lib/dpkg/alternatives/iptables
	I0831 22:32:58.579570  283957 oci.go:144] the created container "addons-926553" has a running status.
	I0831 22:32:58.579597  283957 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa...
	I0831 22:32:58.856139  283957 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0831 22:32:58.888353  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:32:58.918856  283957 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0831 22:32:58.918881  283957 kic_runner.go:114] Args: [docker exec --privileged addons-926553 chown docker:docker /home/docker/.ssh/authorized_keys]
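
[editor's note] The id_rsa / authorized_keys steps above are plain OpenSSH material: generate a keypair, keep the private key under .minikube/machines/<profile>/, and copy the public key in authorized_keys format into the container (the log shows 381 bytes landing in /home/docker/.ssh/authorized_keys before the chown). A sketch of producing those two artifacts with the standard library plus golang.org/x/crypto/ssh; the output file names here are illustrative.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Keypair that will back the "docker" user's SSH login into the node container.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	// Private key in PEM form, e.g. .minikube/machines/<name>/id_rsa.
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
		log.Fatal(err)
	}

	// Public key in authorized_keys format, destined for
	// /home/docker/.ssh/authorized_keys inside the container.
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
		log.Fatal(err)
	}
}
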
	I0831 22:32:58.994745  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:32:59.020659  283957 machine.go:93] provisionDockerMachine start ...
	I0831 22:32:59.020755  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:32:59.042776  283957 main.go:141] libmachine: Using SSH client type: native
	I0831 22:32:59.043049  283957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0831 22:32:59.043065  283957 main.go:141] libmachine: About to run SSH command:
	hostname
	I0831 22:32:59.043777  283957 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0831 22:33:02.183965  283957 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-926553
	
	I0831 22:33:02.183992  283957 ubuntu.go:169] provisioning hostname "addons-926553"
	I0831 22:33:02.184057  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:02.201134  283957 main.go:141] libmachine: Using SSH client type: native
	I0831 22:33:02.201387  283957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0831 22:33:02.201404  283957 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-926553 && echo "addons-926553" | sudo tee /etc/hostname
	I0831 22:33:02.349789  283957 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-926553
	
	I0831 22:33:02.349888  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:02.372048  283957 main.go:141] libmachine: Using SSH client type: native
	I0831 22:33:02.372306  283957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0831 22:33:02.372323  283957 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-926553' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-926553/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-926553' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 22:33:02.504705  283957 main.go:141] libmachine: SSH cmd err, output: <nil>: 
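
[editor's note] libmachine's "native" SSH client seen above amounts to an authenticated TCP session to the forwarded port (127.0.0.1:33133 here) over which each provisioning command is executed. A rough equivalent with golang.org/x/crypto/ssh; the port, user and hostname command mirror the log, and host-key checking is skipped only because the endpoint is a local, just-created container. Not minikube's actual code.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPEM, err := os.ReadFile("id_rsa") // the kic key generated earlier
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local throwaway container only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33133", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// Same command the provisioner runs to set the node hostname.
	out, err := sess.CombinedOutput(`sudo hostname addons-926553 && echo "addons-926553" | sudo tee /etc/hostname`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}
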
	I0831 22:33:02.504736  283957 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-277799/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-277799/.minikube}
	I0831 22:33:02.504768  283957 ubuntu.go:177] setting up certificates
	I0831 22:33:02.504779  283957 provision.go:84] configureAuth start
	I0831 22:33:02.504849  283957 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "addons-926553")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-926553
	I0831 22:33:02.523280  283957 provision.go:143] copyHostCerts
	I0831 22:33:02.523372  283957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-277799/.minikube/ca.pem (1082 bytes)
	I0831 22:33:02.523504  283957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-277799/.minikube/cert.pem (1123 bytes)
	I0831 22:33:02.523567  283957 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-277799/.minikube/key.pem (1675 bytes)
	I0831 22:33:02.523620  283957 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca-key.pem org=jenkins.addons-926553 san=[127.0.0.1 192.168.49.2 addons-926553 localhost minikube]
	I0831 22:33:02.933713  283957 provision.go:177] copyRemoteCerts
	I0831 22:33:02.933792  283957 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 22:33:02.933842  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:02.950418  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:03.053745  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0831 22:33:03.085010  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0831 22:33:03.111911  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0831 22:33:03.138695  283957 provision.go:87] duration metric: took 633.893833ms to configureAuth
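
[editor's note] configureAuth above amounts to issuing a server certificate off the local minikube CA with the SANs listed in the log (127.0.0.1, 192.168.49.2, addons-926553, localhost, minikube) and copying ca.pem, server.pem and server-key.pem into /etc/docker on the node. A compact crypto/x509 sketch of that issuance; it generates a throwaway CA inline, whereas the real flow reuses ca.pem/ca-key.pem from the .minikube/certs directory.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; the real flow loads ca.pem / ca-key.pem instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "addons-926553", Organization: []string{"jenkins.addons-926553"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-926553", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}), 0644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(srvKey)}), 0600); err != nil {
		log.Fatal(err)
	}
}
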
	I0831 22:33:03.138724  283957 ubuntu.go:193] setting minikube options for container-runtime
	I0831 22:33:03.138976  283957 config.go:182] Loaded profile config "addons-926553": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:33:03.139098  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:03.157231  283957 main.go:141] libmachine: Using SSH client type: native
	I0831 22:33:03.157489  283957 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0831 22:33:03.157510  283957 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0831 22:33:03.395474  283957 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0831 22:33:03.395500  283957 machine.go:96] duration metric: took 4.374820866s to provisionDockerMachine
	I0831 22:33:03.395511  283957 client.go:171] duration metric: took 12.46592371s to LocalClient.Create
	I0831 22:33:03.395523  283957 start.go:167] duration metric: took 12.465982753s to libmachine.API.Create "addons-926553"
	I0831 22:33:03.395532  283957 start.go:293] postStartSetup for "addons-926553" (driver="docker")
	I0831 22:33:03.395543  283957 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 22:33:03.395618  283957 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 22:33:03.395665  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:03.414120  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:03.513743  283957 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 22:33:03.517073  283957 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0831 22:33:03.517108  283957 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0831 22:33:03.517137  283957 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0831 22:33:03.517155  283957 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0831 22:33:03.517165  283957 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-277799/.minikube/addons for local assets ...
	I0831 22:33:03.517246  283957 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-277799/.minikube/files for local assets ...
	I0831 22:33:03.517272  283957 start.go:296] duration metric: took 121.734053ms for postStartSetup
	I0831 22:33:03.517586  283957 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "addons-926553")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-926553
	I0831 22:33:03.539317  283957 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/config.json ...
	I0831 22:33:03.539619  283957 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:33:03.539672  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:03.556680  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:03.650277  283957 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0831 22:33:03.654747  283957 start.go:128] duration metric: took 12.727579827s to createHost
	I0831 22:33:03.654772  283957 start.go:83] releasing machines lock for "addons-926553", held for 12.727737422s
	I0831 22:33:03.654860  283957 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "addons-926553")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-926553
	I0831 22:33:03.672628  283957 ssh_runner.go:195] Run: cat /version.json
	I0831 22:33:03.672710  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:03.673358  283957 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 22:33:03.673442  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:03.697266  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:03.710029  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:03.795932  283957 ssh_runner.go:195] Run: systemctl --version
	I0831 22:33:03.930195  283957 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0831 22:33:04.071340  283957 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0831 22:33:04.075814  283957 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 22:33:04.099545  283957 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0831 22:33:04.099629  283957 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 22:33:04.136429  283957 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0831 22:33:04.136452  283957 start.go:495] detecting cgroup driver to use...
	I0831 22:33:04.136490  283957 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0831 22:33:04.136563  283957 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0831 22:33:04.152782  283957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 22:33:04.164726  283957 docker.go:217] disabling cri-docker service (if available) ...
	I0831 22:33:04.164790  283957 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0831 22:33:04.179068  283957 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0831 22:33:04.193725  283957 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0831 22:33:04.288369  283957 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0831 22:33:04.384337  283957 docker.go:233] disabling docker service ...
	I0831 22:33:04.384478  283957 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0831 22:33:04.405127  283957 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0831 22:33:04.417339  283957 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0831 22:33:04.502240  283957 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0831 22:33:04.591263  283957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0831 22:33:04.604121  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 22:33:04.621501  283957 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0831 22:33:04.621615  283957 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:33:04.632529  283957 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0831 22:33:04.632622  283957 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:33:04.642518  283957 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:33:04.652512  283957 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:33:04.663605  283957 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 22:33:04.672528  283957 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:33:04.682613  283957 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:33:04.698852  283957 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 22:33:04.708709  283957 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 22:33:04.716981  283957 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 22:33:04.725394  283957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:33:04.831046  283957 ssh_runner.go:195] Run: sudo systemctl restart crio
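
[editor's note] The block of sed invocations above rewrites a handful of keys in the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf -- pause_image to registry.k8s.io/pause:3.10, cgroup_manager to cgroupfs, plus the conmon_cgroup and default_sysctls entries -- before the daemon-reload and crio restart. The same kind of edit expressed in Go with regexp, as a sketch only; the file path is the one shown in the log.

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	data = pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))

	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	data = cgroup.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))

	if err := os.WriteFile(path, data, 0644); err != nil {
		log.Fatal(err)
	}
	// A `systemctl daemon-reload` and `systemctl restart crio` would follow,
	// as in the log above.
}
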
	I0831 22:33:04.953766  283957 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0831 22:33:04.953873  283957 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0831 22:33:04.958520  283957 start.go:563] Will wait 60s for crictl version
	I0831 22:33:04.958584  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:33:04.962128  283957 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 22:33:04.997059  283957 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0831 22:33:04.997167  283957 ssh_runner.go:195] Run: crio --version
	I0831 22:33:05.045856  283957 ssh_runner.go:195] Run: crio --version
	I0831 22:33:05.092004  283957 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0831 22:33:05.094977  283957 cli_runner.go:164] Run: docker network inspect addons-926553 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0831 22:33:05.112048  283957 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0831 22:33:05.116110  283957 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 22:33:05.128026  283957 kubeadm.go:883] updating cluster {Name:addons-926553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-926553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0831 22:33:05.128170  283957 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:33:05.128234  283957 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 22:33:05.208377  283957 crio.go:514] all images are preloaded for cri-o runtime.
	I0831 22:33:05.208421  283957 crio.go:433] Images already preloaded, skipping extraction
	I0831 22:33:05.208479  283957 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 22:33:05.246065  283957 crio.go:514] all images are preloaded for cri-o runtime.
	I0831 22:33:05.246089  283957 cache_images.go:84] Images are preloaded, skipping loading
	I0831 22:33:05.246099  283957 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0831 22:33:05.246205  283957 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-926553 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-926553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0831 22:33:05.246297  283957 ssh_runner.go:195] Run: crio config
	I0831 22:33:05.292734  283957 cni.go:84] Creating CNI manager for ""
	I0831 22:33:05.292759  283957 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0831 22:33:05.292771  283957 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0831 22:33:05.292794  283957 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-926553 NodeName:addons-926553 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0831 22:33:05.293025  283957 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-926553"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0831 22:33:05.293106  283957 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0831 22:33:05.302182  283957 binaries.go:44] Found k8s binaries, skipping transfer
	I0831 22:33:05.302257  283957 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0831 22:33:05.311092  283957 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0831 22:33:05.329236  283957 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 22:33:05.347791  283957 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
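
[editor's note] The rendered kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that, per the scp line just above, lands on the node as /var/tmp/minikube/kubeadm.yaml.new (2151 bytes). A quick sanity check of such a stream with gopkg.in/yaml.v3, listing each document's apiVersion/kind; purely illustrative and not part of minikube.

package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // end of the multi-document stream
			}
			log.Fatal(err)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}
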
	I0831 22:33:05.366848  283957 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0831 22:33:05.370373  283957 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 22:33:05.381457  283957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:33:05.465768  283957 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 22:33:05.479694  283957 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553 for IP: 192.168.49.2
	I0831 22:33:05.479717  283957 certs.go:194] generating shared ca certs ...
	I0831 22:33:05.479733  283957 certs.go:226] acquiring lock for ca certs: {Name:mk25c48345241d49df22687ae20353d5a7b46e8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:05.479864  283957 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key
	I0831 22:33:06.370705  283957 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt ...
	I0831 22:33:06.370800  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt: {Name:mk127fa4684d9b07fbbfe78fd379ac7f2858784d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:06.371022  283957 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key ...
	I0831 22:33:06.371065  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key: {Name:mkaa1c85c29bc9b8e67687de42c28210df6897ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:06.372603  283957 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key
	I0831 22:33:06.601904  283957 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.crt ...
	I0831 22:33:06.601936  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.crt: {Name:mkdc81b529896f489764dcced8efa122bc80e6c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:06.602125  283957 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key ...
	I0831 22:33:06.602138  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key: {Name:mkd36c32182ba675bb26d2d1c2420f0531884885 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:06.602761  283957 certs.go:256] generating profile certs ...
	I0831 22:33:06.602831  283957 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.key
	I0831 22:33:06.602851  283957 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt with IP's: []
	I0831 22:33:07.200696  283957 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt ...
	I0831 22:33:07.200743  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: {Name:mk55d73b23a418e158fddd2a2029982fed955c08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:07.200943  283957 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.key ...
	I0831 22:33:07.200989  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.key: {Name:mk59a6767b126a801e3c15dd1fd3a3348aa14ff1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:07.201084  283957 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.key.38417bc3
	I0831 22:33:07.201105  283957 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.crt.38417bc3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0831 22:33:07.643963  283957 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.crt.38417bc3 ...
	I0831 22:33:07.643994  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.crt.38417bc3: {Name:mk8845045369642c2652f6024489c05d54865b33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:07.644178  283957 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.key.38417bc3 ...
	I0831 22:33:07.644191  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.key.38417bc3: {Name:mk69db76c63a333ce273b6b1150f927c3534bc21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:07.644723  283957 certs.go:381] copying /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.crt.38417bc3 -> /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.crt
	I0831 22:33:07.644822  283957 certs.go:385] copying /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.key.38417bc3 -> /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.key
	I0831 22:33:07.644885  283957 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/proxy-client.key
	I0831 22:33:07.644904  283957 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/proxy-client.crt with IP's: []
	I0831 22:33:07.769112  283957 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/proxy-client.crt ...
	I0831 22:33:07.769146  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/proxy-client.crt: {Name:mk709a4df7e86ad0190ea4e7918008cb10101a95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:07.769717  283957 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/proxy-client.key ...
	I0831 22:33:07.769737  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/proxy-client.key: {Name:mk55ab13960a2f23e6e30c97ac70318ef038cdd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:07.769938  283957 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca-key.pem (1679 bytes)
	I0831 22:33:07.769982  283957 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem (1082 bytes)
	I0831 22:33:07.770019  283957 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem (1123 bytes)
	I0831 22:33:07.770046  283957 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem (1675 bytes)
	I0831 22:33:07.770668  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 22:33:07.796259  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 22:33:07.828503  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 22:33:07.867326  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0831 22:33:07.892900  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0831 22:33:07.917006  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0831 22:33:07.941026  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 22:33:07.964770  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0831 22:33:07.989226  283957 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 22:33:08.021885  283957 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0831 22:33:08.053952  283957 ssh_runner.go:195] Run: openssl version
	I0831 22:33:08.060101  283957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 22:33:08.070747  283957 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:33:08.074388  283957 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:33:08.074466  283957 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:33:08.082225  283957 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0831 22:33:08.092117  283957 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 22:33:08.095591  283957 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0831 22:33:08.095645  283957 kubeadm.go:392] StartCluster: {Name:addons-926553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-926553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:33:08.095732  283957 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0831 22:33:08.095788  283957 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0831 22:33:08.141952  283957 cri.go:89] found id: ""
	I0831 22:33:08.142024  283957 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0831 22:33:08.151170  283957 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0831 22:33:08.160571  283957 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0831 22:33:08.160636  283957 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0831 22:33:08.169922  283957 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0831 22:33:08.169943  283957 kubeadm.go:157] found existing configuration files:
	
	I0831 22:33:08.170003  283957 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0831 22:33:08.178997  283957 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0831 22:33:08.179084  283957 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0831 22:33:08.187643  283957 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0831 22:33:08.196349  283957 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0831 22:33:08.196437  283957 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0831 22:33:08.205030  283957 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0831 22:33:08.213907  283957 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0831 22:33:08.213994  283957 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0831 22:33:08.222476  283957 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0831 22:33:08.231658  283957 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0831 22:33:08.231726  283957 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0831 22:33:08.240283  283957 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0831 22:33:08.279889  283957 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0831 22:33:08.280060  283957 kubeadm.go:310] [preflight] Running pre-flight checks
	I0831 22:33:08.302891  283957 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0831 22:33:08.302989  283957 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-aws
	I0831 22:33:08.303047  283957 kubeadm.go:310] OS: Linux
	I0831 22:33:08.303109  283957 kubeadm.go:310] CGROUPS_CPU: enabled
	I0831 22:33:08.303175  283957 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0831 22:33:08.303241  283957 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0831 22:33:08.303307  283957 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0831 22:33:08.303382  283957 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0831 22:33:08.303472  283957 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0831 22:33:08.303576  283957 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0831 22:33:08.303659  283957 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0831 22:33:08.303742  283957 kubeadm.go:310] CGROUPS_BLKIO: enabled
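
[editor's note] The CGROUPS_* lines above come from kubeadm's system verification; on a cgroup v1 host that information can be read from /proc/cgroups, which lists each controller and whether it is enabled. A small reader for that file, shown only to illustrate where checks of this kind get their data; it is not the verification code itself.

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/cgroups")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "#") {
			continue // header: subsys_name hierarchy num_cgroups enabled
		}
		fields := strings.Fields(line)
		if len(fields) < 4 {
			continue
		}
		state := "disabled"
		if fields[3] == "1" {
			state = "enabled"
		}
		fmt.Printf("CGROUPS_%s: %s\n", strings.ToUpper(fields[0]), state)
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}
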
	I0831 22:33:08.375106  283957 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0831 22:33:08.375280  283957 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0831 22:33:08.375404  283957 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0831 22:33:08.381947  283957 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0831 22:33:08.385255  283957 out.go:235]   - Generating certificates and keys ...
	I0831 22:33:08.385428  283957 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0831 22:33:08.385523  283957 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0831 22:33:08.637437  283957 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0831 22:33:09.463131  283957 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0831 22:33:10.033346  283957 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0831 22:33:10.906857  283957 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0831 22:33:11.453764  283957 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0831 22:33:11.454108  283957 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-926553 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0831 22:33:12.062393  283957 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0831 22:33:12.062743  283957 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-926553 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0831 22:33:12.309286  283957 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0831 22:33:12.573925  283957 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0831 22:33:12.914344  283957 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0831 22:33:12.914632  283957 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0831 22:33:13.308464  283957 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0831 22:33:13.644764  283957 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0831 22:33:14.238434  283957 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0831 22:33:14.678365  283957 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0831 22:33:15.169684  283957 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0831 22:33:15.170746  283957 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0831 22:33:15.174253  283957 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0831 22:33:15.177263  283957 out.go:235]   - Booting up control plane ...
	I0831 22:33:15.177380  283957 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0831 22:33:15.177460  283957 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0831 22:33:15.178516  283957 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0831 22:33:15.190024  283957 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0831 22:33:15.196959  283957 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0831 22:33:15.197061  283957 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0831 22:33:15.294087  283957 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0831 22:33:15.294208  283957 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0831 22:33:16.295207  283957 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00118568s
	I0831 22:33:16.295299  283957 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0831 22:33:22.297225  283957 kubeadm.go:310] [api-check] The API server is healthy after 6.002301756s
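
[editor's note] The kubelet-check and api-check phases above are simple health polls: kubeadm hits http://127.0.0.1:10248/healthz until the kubelet answers (1.0s here) and then the API server's health endpoint until it answers (6.0s here). A minimal polling loop against the kubelet endpoint, for illustration; the 4m0s deadline mirrors the "can take up to 4m0s" message in the log.

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute) // kubeadm's "up to 4m0s"
	client := &http.Client{Timeout: 2 * time.Second}

	for time.Now().Before(deadline) {
		resp, err := client.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet is healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kubelet healthz")
}
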
	I0831 22:33:22.317717  283957 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0831 22:33:22.333223  283957 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0831 22:33:22.356793  283957 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0831 22:33:22.356989  283957 kubeadm.go:310] [mark-control-plane] Marking the node addons-926553 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0831 22:33:22.368934  283957 kubeadm.go:310] [bootstrap-token] Using token: bpizuk.5bt7ue9fr9w4aczf
	I0831 22:33:22.373429  283957 out.go:235]   - Configuring RBAC rules ...
	I0831 22:33:22.373568  283957 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0831 22:33:22.379902  283957 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0831 22:33:22.391608  283957 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0831 22:33:22.397570  283957 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0831 22:33:22.401429  283957 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0831 22:33:22.405725  283957 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0831 22:33:22.704690  283957 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0831 22:33:23.180935  283957 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0831 22:33:23.704316  283957 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0831 22:33:23.707745  283957 kubeadm.go:310] 
	I0831 22:33:23.707828  283957 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0831 22:33:23.707837  283957 kubeadm.go:310] 
	I0831 22:33:23.707924  283957 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0831 22:33:23.707936  283957 kubeadm.go:310] 
	I0831 22:33:23.707962  283957 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0831 22:33:23.708048  283957 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0831 22:33:23.708128  283957 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0831 22:33:23.708138  283957 kubeadm.go:310] 
	I0831 22:33:23.708191  283957 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0831 22:33:23.708200  283957 kubeadm.go:310] 
	I0831 22:33:23.708251  283957 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0831 22:33:23.708259  283957 kubeadm.go:310] 
	I0831 22:33:23.708311  283957 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0831 22:33:23.708384  283957 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0831 22:33:23.708476  283957 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0831 22:33:23.708490  283957 kubeadm.go:310] 
	I0831 22:33:23.708572  283957 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0831 22:33:23.708648  283957 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0831 22:33:23.708655  283957 kubeadm.go:310] 
	I0831 22:33:23.708737  283957 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bpizuk.5bt7ue9fr9w4aczf \
	I0831 22:33:23.708860  283957 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3593c859f62fc352e4288d7593bda1bad3208e885169afef8f46acbefa784a7c \
	I0831 22:33:23.708888  283957 kubeadm.go:310] 	--control-plane 
	I0831 22:33:23.708893  283957 kubeadm.go:310] 
	I0831 22:33:23.708977  283957 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0831 22:33:23.708982  283957 kubeadm.go:310] 
	I0831 22:33:23.709068  283957 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bpizuk.5bt7ue9fr9w4aczf \
	I0831 22:33:23.709169  283957 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3593c859f62fc352e4288d7593bda1bad3208e885169afef8f46acbefa784a7c 
	I0831 22:33:23.712617  283957 kubeadm.go:310] W0831 22:33:08.276569    1188 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0831 22:33:23.712923  283957 kubeadm.go:310] W0831 22:33:08.277503    1188 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0831 22:33:23.713163  283957 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-aws\n", err: exit status 1
	I0831 22:33:23.713299  283957 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
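
[editor's note] The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA certificate's Subject Public Key Info (RFC 7469-style pinning), which a joining node can recompute to verify the CA before trusting it. A sketch of that computation against /var/lib/minikube/certs/ca.crt, the certificatesDir used by this profile; the log already shows the resulting value, so this is illustrative only.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// Hash the DER encoding of the certificate's Subject Public Key Info.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
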
	I0831 22:33:23.713314  283957 cni.go:84] Creating CNI manager for ""
	I0831 22:33:23.713322  283957 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0831 22:33:23.716282  283957 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0831 22:33:23.719220  283957 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0831 22:33:23.723271  283957 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0831 22:33:23.723293  283957 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0831 22:33:23.741607  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0831 22:33:24.052823  283957 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0831 22:33:24.052918  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:33:24.052970  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-926553 minikube.k8s.io/updated_at=2024_08_31T22_33_24_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2 minikube.k8s.io/name=addons-926553 minikube.k8s.io/primary=true
	I0831 22:33:24.230141  283957 ops.go:34] apiserver oom_adj: -16
	I0831 22:33:24.230269  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:33:24.730397  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:33:25.230993  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:33:25.730610  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:33:26.230407  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:33:26.730761  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:33:27.231064  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:33:27.730886  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:33:28.230560  283957 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:33:28.321873  283957 kubeadm.go:1113] duration metric: took 4.26902395s to wait for elevateKubeSystemPrivileges
	I0831 22:33:28.321901  283957 kubeadm.go:394] duration metric: took 20.226260277s to StartCluster
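The elevateKubeSystemPrivileges step timed above corresponds to the clusterrolebinding created at 22:33:24 (minikube-rbac, binding cluster-admin to the kube-system:default service account) plus the polling of the default service account that follows. A quick check that the grant exists, reusing the binary and kubeconfig paths from the log (the check itself is illustrative, not part of the harness):

	# confirm cluster-admin is bound to kube-system:default
	sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get clusterrolebinding minikube-rbac -o yaml
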
	I0831 22:33:28.321917  283957 settings.go:142] acquiring lock: {Name:mkadbc7d53c5858a38d57ec152e52037ebee242b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:28.322035  283957 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-277799/kubeconfig
	I0831 22:33:28.322400  283957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/kubeconfig: {Name:mk030275545fba839e6cc35acffc3f7a124ed10a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:33:28.323046  283957 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 22:33:28.323174  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0831 22:33:28.323438  283957 config.go:182] Loaded profile config "addons-926553": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:33:28.323475  283957 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0831 22:33:28.323555  283957 addons.go:69] Setting yakd=true in profile "addons-926553"
	I0831 22:33:28.323574  283957 addons.go:234] Setting addon yakd=true in "addons-926553"
	I0831 22:33:28.323597  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.324068  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.324839  283957 addons.go:69] Setting cloud-spanner=true in profile "addons-926553"
	I0831 22:33:28.324866  283957 addons.go:234] Setting addon cloud-spanner=true in "addons-926553"
	I0831 22:33:28.324890  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.325338  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.325583  283957 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-926553"
	I0831 22:33:28.325617  283957 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-926553"
	I0831 22:33:28.325650  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.326088  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.326414  283957 addons.go:69] Setting registry=true in profile "addons-926553"
	I0831 22:33:28.326440  283957 addons.go:234] Setting addon registry=true in "addons-926553"
	I0831 22:33:28.326465  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.326854  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.329500  283957 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-926553"
	I0831 22:33:28.329573  283957 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-926553"
	I0831 22:33:28.329606  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.330028  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.342354  283957 addons.go:69] Setting default-storageclass=true in profile "addons-926553"
	I0831 22:33:28.342397  283957 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-926553"
	I0831 22:33:28.342712  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.342891  283957 addons.go:69] Setting storage-provisioner=true in profile "addons-926553"
	I0831 22:33:28.342929  283957 addons.go:234] Setting addon storage-provisioner=true in "addons-926553"
	I0831 22:33:28.342990  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.349869  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.360101  283957 addons.go:69] Setting gcp-auth=true in profile "addons-926553"
	I0831 22:33:28.360166  283957 mustload.go:65] Loading cluster: addons-926553
	I0831 22:33:28.360443  283957 config.go:182] Loaded profile config "addons-926553": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:33:28.360907  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.366186  283957 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-926553"
	I0831 22:33:28.366367  283957 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-926553"
	I0831 22:33:28.366876  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.375213  283957 addons.go:69] Setting ingress=true in profile "addons-926553"
	I0831 22:33:28.375277  283957 addons.go:234] Setting addon ingress=true in "addons-926553"
	I0831 22:33:28.375340  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.376302  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.380625  283957 addons.go:69] Setting volcano=true in profile "addons-926553"
	I0831 22:33:28.380724  283957 addons.go:234] Setting addon volcano=true in "addons-926553"
	I0831 22:33:28.380800  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.381420  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.394994  283957 addons.go:69] Setting ingress-dns=true in profile "addons-926553"
	I0831 22:33:28.395035  283957 addons.go:234] Setting addon ingress-dns=true in "addons-926553"
	I0831 22:33:28.395105  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.395705  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.401028  283957 addons.go:69] Setting volumesnapshots=true in profile "addons-926553"
	I0831 22:33:28.401089  283957 addons.go:234] Setting addon volumesnapshots=true in "addons-926553"
	I0831 22:33:28.401140  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.401758  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.402549  283957 out.go:177] * Verifying Kubernetes components...
	I0831 22:33:28.428671  283957 addons.go:69] Setting inspektor-gadget=true in profile "addons-926553"
	I0831 22:33:28.428730  283957 addons.go:234] Setting addon inspektor-gadget=true in "addons-926553"
	I0831 22:33:28.428784  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.429708  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.482979  283957 addons.go:69] Setting metrics-server=true in profile "addons-926553"
	I0831 22:33:28.483022  283957 addons.go:234] Setting addon metrics-server=true in "addons-926553"
	I0831 22:33:28.483067  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.483527  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.544912  283957 out.go:177]   - Using image docker.io/registry:2.8.3
	I0831 22:33:28.556676  283957 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0831 22:33:28.590337  283957 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:33:28.592737  283957 addons.go:234] Setting addon default-storageclass=true in "addons-926553"
	I0831 22:33:28.592814  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.593533  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.602703  283957 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0831 22:33:28.613912  283957 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0831 22:33:28.616792  283957 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0831 22:33:28.616842  283957 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0831 22:33:28.616938  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.642744  283957 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0831 22:33:28.642778  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0831 22:33:28.642878  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.643214  283957 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0831 22:33:28.643541  283957 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0831 22:33:28.643555  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0831 22:33:28.643629  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.673321  283957 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 22:33:28.673653  283957 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0831 22:33:28.676085  283957 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 22:33:28.676117  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0831 22:33:28.676204  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.680157  283957 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0831 22:33:28.682935  283957 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	W0831 22:33:28.685152  283957 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0831 22:33:28.688167  283957 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0831 22:33:28.688191  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0831 22:33:28.688265  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.711171  283957 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0831 22:33:28.711327  283957 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0831 22:33:28.711525  283957 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0831 22:33:28.713867  283957 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0831 22:33:28.713897  283957 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0831 22:33:28.714009  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.716525  283957 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0831 22:33:28.716567  283957 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0831 22:33:28.716656  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.730720  283957 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0831 22:33:28.736851  283957 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0831 22:33:28.740033  283957 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0831 22:33:28.742712  283957 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0831 22:33:28.743079  283957 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0831 22:33:28.743093  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0831 22:33:28.743174  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.743485  283957 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0831 22:33:28.743695  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.746617  283957 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0831 22:33:28.746887  283957 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0831 22:33:28.746921  283957 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0831 22:33:28.746978  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.780079  283957 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0831 22:33:28.780109  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0831 22:33:28.780197  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.789453  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
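The one-liner above injects a host record for host.minikube.internal into CoreDNS: it dumps the coredns ConfigMap, uses sed to insert a hosts block (192.168.49.1 mapped to host.minikube.internal, with fallthrough) before the forward plugin and a log directive before errors, then pipes the result back through kubectl replace. The same command, reflowed for readability only:

	sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o yaml \
	  | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
	        -e '/^        errors *$/i \        log' \
	  | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -
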
	I0831 22:33:28.790925  283957 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0831 22:33:28.793675  283957 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0831 22:33:28.799676  283957 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0831 22:33:28.803611  283957 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0831 22:33:28.803643  283957 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0831 22:33:28.803743  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.812459  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:28.863289  283957 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-926553"
	I0831 22:33:28.863352  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:28.863920  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:28.868300  283957 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0831 22:33:28.868325  283957 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0831 22:33:28.868620  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:28.883317  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:28.949248  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:28.960803  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:29.002979  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:29.003648  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:29.046634  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:29.047268  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:29.055178  283957 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0831 22:33:29.055568  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:29.061509  283957 out.go:177]   - Using image docker.io/busybox:stable
	I0831 22:33:29.064558  283957 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0831 22:33:29.064583  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0831 22:33:29.064648  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:29.066321  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:29.088665  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:29.089600  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:29.108712  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:29.442641  283957 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0831 22:33:29.442677  283957 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0831 22:33:29.526255  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0831 22:33:29.530947  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0831 22:33:29.533064  283957 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0831 22:33:29.533105  283957 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0831 22:33:29.534377  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0831 22:33:29.596562  283957 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0831 22:33:29.596599  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0831 22:33:29.613705  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0831 22:33:29.630607  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0831 22:33:29.647426  283957 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0831 22:33:29.647458  283957 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0831 22:33:29.653219  283957 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0831 22:33:29.653263  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0831 22:33:29.657539  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 22:33:29.660517  283957 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0831 22:33:29.660568  283957 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0831 22:33:29.663695  283957 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.073312217s)
	I0831 22:33:29.663842  283957 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 22:33:29.666078  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0831 22:33:29.710627  283957 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0831 22:33:29.710667  283957 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0831 22:33:29.735336  283957 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0831 22:33:29.735373  283957 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0831 22:33:29.784696  283957 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0831 22:33:29.784736  283957 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0831 22:33:29.855640  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0831 22:33:29.877050  283957 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0831 22:33:29.877099  283957 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0831 22:33:29.904145  283957 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0831 22:33:29.904181  283957 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0831 22:33:29.911988  283957 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0831 22:33:29.912025  283957 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0831 22:33:29.945936  283957 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0831 22:33:29.945983  283957 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0831 22:33:29.979809  283957 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0831 22:33:29.979844  283957 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0831 22:33:30.081879  283957 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0831 22:33:30.081924  283957 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0831 22:33:30.094433  283957 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0831 22:33:30.094470  283957 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0831 22:33:30.121606  283957 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0831 22:33:30.121648  283957 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0831 22:33:30.147465  283957 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0831 22:33:30.147495  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0831 22:33:30.194891  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0831 22:33:30.335224  283957 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0831 22:33:30.335253  283957 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0831 22:33:30.357845  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0831 22:33:30.370585  283957 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0831 22:33:30.370614  283957 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0831 22:33:30.380434  283957 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0831 22:33:30.380480  283957 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0831 22:33:30.470604  283957 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0831 22:33:30.470632  283957 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0831 22:33:30.474717  283957 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0831 22:33:30.474743  283957 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0831 22:33:30.480308  283957 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0831 22:33:30.480332  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0831 22:33:30.551614  283957 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0831 22:33:30.551645  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0831 22:33:30.555526  283957 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0831 22:33:30.555551  283957 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0831 22:33:30.572488  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0831 22:33:30.626735  283957 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0831 22:33:30.626772  283957 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0831 22:33:30.659143  283957 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0831 22:33:30.659168  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0831 22:33:30.708306  283957 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0831 22:33:30.708339  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0831 22:33:30.751947  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0831 22:33:30.779486  283957 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0831 22:33:30.779512  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0831 22:33:30.883168  283957 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0831 22:33:30.883208  283957 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0831 22:33:31.034072  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0831 22:33:32.347271  283957 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.557782579s)
	I0831 22:33:32.347302  283957 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0831 22:33:32.348296  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.821995008s)
	I0831 22:33:33.626333  283957 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-926553" context rescaled to 1 replicas
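Rescaling coredns to a single replica is routine for a one-node profile; the harness does it through kapi.go (presumably via the Go client rather than kubectl), but the equivalent manual command would be:

	# drop CoreDNS to one replica on a single-node cluster
	kubectl -n kube-system scale deployment coredns --replicas=1
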
	I0831 22:33:34.257701  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.726689759s)
	I0831 22:33:34.257836  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.72343282s)
	I0831 22:33:35.750704  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.136952511s)
	I0831 22:33:35.750778  283957 addons.go:475] Verifying addon ingress=true in "addons-926553"
	I0831 22:33:35.750934  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.120293953s)
	I0831 22:33:35.751173  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.093604816s)
	I0831 22:33:35.751232  283957 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.087374745s)
	I0831 22:33:35.751352  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.085238138s)
	I0831 22:33:35.751534  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.89586536s)
	I0831 22:33:35.751557  283957 addons.go:475] Verifying addon registry=true in "addons-926553"
	I0831 22:33:35.752026  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.557102507s)
	I0831 22:33:35.752048  283957 addons.go:475] Verifying addon metrics-server=true in "addons-926553"
	I0831 22:33:35.752087  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.394213157s)
	I0831 22:33:35.752485  283957 node_ready.go:35] waiting up to 6m0s for node "addons-926553" to be "Ready" ...
	I0831 22:33:35.753392  283957 out.go:177] * Verifying ingress addon...
	I0831 22:33:35.753389  283957 out.go:177] * Verifying registry addon...
	I0831 22:33:35.755348  283957 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-926553 service yakd-dashboard -n yakd-dashboard
	
	I0831 22:33:35.757767  283957 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0831 22:33:35.757777  283957 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0831 22:33:35.790994  283957 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0831 22:33:35.791083  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:35.805166  283957 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0831 22:33:35.805241  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
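The kapi.go "waiting for pod" lines that repeat below poll the labelled pods until they leave Pending, within each addon's wait window. A rough out-of-band equivalent using kubectl wait (labels and namespaces are taken from the log; kubectl wait is an illustration, not what the test harness runs):

	# block until the registry pod is Ready, up to a 6m window
	kubectl -n kube-system wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=registry --timeout=6m
	# same idea for the ingress-nginx pods
	kubectl -n ingress-nginx wait --for=condition=Ready pod \
	  -l app.kubernetes.io/name=ingress-nginx --timeout=6m
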
	I0831 22:33:35.829747  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.25721674s)
	W0831 22:33:35.830055  283957 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0831 22:33:35.830104  283957 retry.go:31] will retry after 224.217796ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0831 22:33:35.829894  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.077890025s)
	W0831 22:33:35.831762  283957 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0831 22:33:36.055322  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
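The failed apply above is a CRD/CR ordering race: the VolumeSnapshotClass object is submitted in the same kubectl apply that creates its CustomResourceDefinition, so the REST mapping for snapshot.storage.k8s.io/v1 is not available yet and kubectl reports "ensure CRDs are installed first". The harness simply retries after a short backoff, which is the --force re-apply above. When applying these manifests by hand, the race can be avoided by establishing the CRDs first; a sketch using the file and CRD names from the stdout above (the kubectl wait usage is illustrative):

	# apply the snapshot CRDs and wait until the API server establishes them
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for=condition=Established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	  crd/volumesnapshots.snapshot.storage.k8s.io
	# only then apply the objects that depend on the CRDs
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
	  -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
	  -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
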
	I0831 22:33:36.093372  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.059244383s)
	I0831 22:33:36.093453  283957 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-926553"
	I0831 22:33:36.096487  283957 out.go:177] * Verifying csi-hostpath-driver addon...
	I0831 22:33:36.100111  283957 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0831 22:33:36.115482  283957 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0831 22:33:36.115552  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:36.263976  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:36.265062  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:36.604587  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:36.787244  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:36.788063  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:37.104478  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:37.265822  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:37.267285  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:37.604559  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:37.756432  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:33:37.765094  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:37.766439  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:38.119367  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:38.262590  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:38.263797  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:38.604697  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:38.763609  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:38.764734  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:39.104910  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:39.268592  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:39.269044  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:39.283217  283957 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.227800168s)
	I0831 22:33:39.608539  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:39.699330  283957 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0831 22:33:39.699446  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:39.716174  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:39.763187  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:39.763930  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:39.822207  283957 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0831 22:33:39.846744  283957 addons.go:234] Setting addon gcp-auth=true in "addons-926553"
	I0831 22:33:39.846795  283957 host.go:66] Checking if "addons-926553" exists ...
	I0831 22:33:39.847250  283957 cli_runner.go:164] Run: docker container inspect addons-926553 --format={{.State.Status}}
	I0831 22:33:39.875523  283957 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0831 22:33:39.875573  283957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-926553
	I0831 22:33:39.898490  283957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/addons-926553/id_rsa Username:docker}
	I0831 22:33:39.991759  283957 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0831 22:33:39.994410  283957 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0831 22:33:39.996970  283957 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0831 22:33:39.996996  283957 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0831 22:33:40.029853  283957 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0831 22:33:40.029886  283957 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0831 22:33:40.054335  283957 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0831 22:33:40.054357  283957 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0831 22:33:40.077923  283957 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
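For gcp-auth, the three manifests copied above (namespace, service, webhook deployment) are applied in a single kubectl call, after which the addon is verified by waiting for the webhook pod labelled kubernetes.io/minikube-addons=gcp-auth in the gcp-auth namespace (the kapi line further down). A hedged spot check once that apply returns:

	# the gcp-auth webhook pod should appear in its own namespace
	kubectl -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth
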
	I0831 22:33:40.110092  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:40.256734  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:33:40.262779  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:40.263788  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:40.618485  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:40.635469  283957 addons.go:475] Verifying addon gcp-auth=true in "addons-926553"
	I0831 22:33:40.638250  283957 out.go:177] * Verifying gcp-auth addon...
	I0831 22:33:40.641904  283957 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0831 22:33:40.717924  283957 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0831 22:33:40.717949  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:40.760808  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:40.761737  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:41.103631  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:41.147102  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:41.261846  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:41.262577  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:41.605179  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:41.645765  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:41.762543  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:41.764051  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:42.105362  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:42.148283  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:42.258237  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:33:42.263214  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:42.264306  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:42.604818  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:42.646007  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:42.762250  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:42.762606  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:43.103968  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:43.145529  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:43.261669  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:43.262507  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:43.603902  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:43.645804  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:43.762089  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:43.762820  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:44.104008  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:44.145229  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:44.261278  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:44.262098  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:44.604072  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:44.645225  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:44.755790  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:33:44.762238  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:44.763675  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:45.119481  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:45.151439  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:45.262923  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:45.263585  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:45.603923  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:45.645062  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:45.762245  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:45.763179  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:46.103665  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:46.145991  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:46.262108  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:46.262871  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:46.603987  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:46.645848  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:46.755967  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:33:46.762356  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:46.763040  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:47.103999  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:47.145133  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:47.265067  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:47.265999  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:47.604241  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:47.645521  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:47.761239  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:47.762226  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:48.104502  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:48.145951  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:48.261871  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:48.262973  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:48.604572  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:48.645471  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:48.762598  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:48.763120  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:49.104271  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:49.145932  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:49.256720  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:33:49.262226  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:49.263641  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:49.604683  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:49.645947  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:49.761803  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:49.762015  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:50.103842  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:50.145422  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:50.261604  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:50.262384  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:50.604492  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:50.645631  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:50.762236  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:50.762361  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:51.104382  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:51.145709  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:51.261382  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:51.262159  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:51.604037  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:51.645599  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:51.756631  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:33:51.762132  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:51.762943  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:52.103840  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:52.146303  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:52.260993  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:52.262050  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:52.604518  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:52.645695  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:52.762149  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:52.762978  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:53.104308  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:53.145453  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:53.262149  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:53.262946  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:53.604459  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:53.645652  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:53.762137  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:53.762727  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:54.104542  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:54.145161  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:54.255923  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:33:54.262062  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:54.263015  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:54.603912  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:54.645424  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:54.761724  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:54.763411  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:55.104967  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:55.145553  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:55.262546  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:55.262785  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:55.604748  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:55.645583  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:55.761826  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:55.763402  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:56.105089  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:56.146463  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:56.256974  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:33:56.263076  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:56.263723  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:56.606473  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:56.647735  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:56.764781  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:56.766164  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:57.104318  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:57.146098  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:57.269923  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:57.271223  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:57.604825  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:57.645919  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:57.763180  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:57.763592  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:58.104174  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:58.145739  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:58.261942  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:58.262811  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:58.603886  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:58.645351  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:58.757020  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:33:58.761675  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:58.763460  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:59.104110  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:59.145526  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:59.262377  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:33:59.262612  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:59.604341  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:33:59.645727  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:33:59.762132  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:33:59.762980  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:00.136282  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:00.175701  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:00.297607  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:00.298427  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:00.605093  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:00.645870  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:00.757140  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:34:00.762169  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:00.763557  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:01.104348  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:01.146225  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:01.261098  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:01.262282  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:01.603884  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:01.645426  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:01.762105  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:01.762957  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:02.104192  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:02.145434  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:02.262134  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:02.262894  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:02.603513  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:02.645138  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:02.762333  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:02.763186  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:03.104291  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:03.145545  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:03.256690  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:34:03.262509  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:03.263063  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:03.604219  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:03.645652  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:03.761550  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:03.763199  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:04.103986  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:04.145092  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:04.260906  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:04.261910  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:04.604129  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:04.645678  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:04.762705  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:04.762793  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:05.104713  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:05.145523  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:05.262711  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:05.263142  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:05.603656  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:05.645384  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:05.756593  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:34:05.762220  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:05.762442  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:06.104276  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:06.145977  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:06.263109  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:06.264246  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:06.605053  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:06.645105  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:06.761724  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:06.762593  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:07.104549  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:07.146001  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:07.262265  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:07.262528  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:07.603862  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:07.645120  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:07.762233  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:07.762720  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:08.104365  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:08.145901  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:08.256630  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:34:08.262630  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:08.263422  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:08.603598  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:08.645197  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:08.761304  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:08.762056  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:09.104651  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:09.145806  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:09.262057  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:09.262888  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:09.604550  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:09.645470  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:09.762054  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:09.763110  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:10.104284  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:10.146229  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:10.257522  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:34:10.261362  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:10.261939  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:10.604131  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:10.646061  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:10.761374  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:10.762267  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:11.104686  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:11.145067  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:11.262003  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:11.262977  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:11.603815  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:11.645555  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:11.762188  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:11.762588  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:12.104640  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:12.145951  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:12.261659  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:12.262461  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:12.604373  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:12.645942  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:12.757266  283957 node_ready.go:53] node "addons-926553" has status "Ready":"False"
	I0831 22:34:12.762383  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:12.762661  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:13.103567  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:13.146021  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:13.262280  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:13.262859  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:13.604082  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:13.650984  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:13.761311  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:13.762021  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:14.104043  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:14.145580  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:14.261335  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:14.262064  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:14.603947  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:14.646679  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:14.762765  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:14.762778  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:15.117766  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:15.153240  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:15.267487  283957 node_ready.go:49] node "addons-926553" has status "Ready":"True"
	I0831 22:34:15.267564  283957 node_ready.go:38] duration metric: took 39.514789095s for node "addons-926553" to be "Ready" ...
	I0831 22:34:15.267592  283957 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 22:34:15.275732  283957 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0831 22:34:15.275809  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:15.276442  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:15.280854  283957 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-sljbt" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:15.629987  283957 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0831 22:34:15.630065  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:15.668193  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:15.787659  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:15.789778  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:16.105852  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:16.145825  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:16.278884  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:16.280021  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:16.605440  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:16.645318  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:16.762858  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:16.765023  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:16.787617  283957 pod_ready.go:93] pod "coredns-6f6b679f8f-sljbt" in "kube-system" namespace has status "Ready":"True"
	I0831 22:34:16.787642  283957 pod_ready.go:82] duration metric: took 1.506753163s for pod "coredns-6f6b679f8f-sljbt" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:16.787677  283957 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-926553" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:16.794111  283957 pod_ready.go:93] pod "etcd-addons-926553" in "kube-system" namespace has status "Ready":"True"
	I0831 22:34:16.794139  283957 pod_ready.go:82] duration metric: took 6.444642ms for pod "etcd-addons-926553" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:16.794155  283957 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-926553" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:16.799542  283957 pod_ready.go:93] pod "kube-apiserver-addons-926553" in "kube-system" namespace has status "Ready":"True"
	I0831 22:34:16.799569  283957 pod_ready.go:82] duration metric: took 5.386535ms for pod "kube-apiserver-addons-926553" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:16.799580  283957 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-926553" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:16.806679  283957 pod_ready.go:93] pod "kube-controller-manager-addons-926553" in "kube-system" namespace has status "Ready":"True"
	I0831 22:34:16.806707  283957 pod_ready.go:82] duration metric: took 7.118805ms for pod "kube-controller-manager-addons-926553" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:16.806721  283957 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2x2mt" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:16.857188  283957 pod_ready.go:93] pod "kube-proxy-2x2mt" in "kube-system" namespace has status "Ready":"True"
	I0831 22:34:16.857218  283957 pod_ready.go:82] duration metric: took 50.489915ms for pod "kube-proxy-2x2mt" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:16.857230  283957 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-926553" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:17.105581  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:17.146191  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:17.258600  283957 pod_ready.go:93] pod "kube-scheduler-addons-926553" in "kube-system" namespace has status "Ready":"True"
	I0831 22:34:17.258669  283957 pod_ready.go:82] duration metric: took 401.429253ms for pod "kube-scheduler-addons-926553" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:17.258694  283957 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace to be "Ready" ...
	I0831 22:34:17.261667  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:17.262687  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:17.604862  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:17.646272  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:17.764936  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:17.765793  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:18.107931  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:18.207202  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:18.302173  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:18.302637  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:18.606559  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:18.646357  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:18.775904  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:18.780122  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:19.110151  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:19.146402  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:19.272716  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:19.275660  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:19.278834  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:19.607312  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:19.646989  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:19.764462  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:19.765340  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:20.108138  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:20.158436  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:20.265037  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:20.265857  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:20.607204  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:20.649365  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:20.766184  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:20.766778  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:21.117188  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:21.146229  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:21.264649  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:21.267229  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:21.605997  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:21.646189  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:21.769252  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:21.776432  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:21.779045  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:22.105797  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:22.205291  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:22.270938  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:22.272159  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:22.606720  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:22.645319  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:22.768212  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:22.769045  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:23.105481  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:23.146025  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:23.264716  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:23.266628  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:23.604946  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:23.645376  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:23.766158  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:23.767067  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:23.797542  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:24.105732  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:24.147335  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:24.266279  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:24.267261  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:24.606800  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:24.646677  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:24.766259  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:24.767462  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:25.106518  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:25.205453  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:25.314730  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:25.316362  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:25.607028  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:25.650341  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:25.770511  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:25.773834  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:26.104894  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:26.145895  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:26.263752  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:26.265016  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:26.267354  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:26.605178  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:26.645897  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:26.767644  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:26.768292  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:27.105737  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:27.145850  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:27.264918  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:27.265889  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:27.605106  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:27.645943  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:27.764477  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:27.766607  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:28.107629  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:28.207239  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:28.263084  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:28.264194  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:28.605775  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:28.646375  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:28.762388  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:28.764546  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:28.767472  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:29.106278  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:29.146524  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:29.265912  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:29.268490  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:29.605745  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:29.646867  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:29.765756  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:29.772314  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:30.122548  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:30.148292  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:30.279047  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:30.280259  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:30.604607  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:30.645863  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:30.765718  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:30.766955  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:30.770653  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:31.107084  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:31.145821  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:31.265330  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:31.266346  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:31.606351  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:31.646041  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:31.762658  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:31.765467  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:32.105934  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:32.145601  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:32.264777  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:32.266337  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:32.605229  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:32.646223  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:32.774989  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:32.785303  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:32.785784  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:33.105083  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:33.146512  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:33.263890  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:33.265992  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:33.606498  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:33.645662  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:33.763811  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:33.764819  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:34.105423  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:34.145701  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:34.266956  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:34.269628  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:34.605901  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:34.645149  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:34.763985  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:34.765038  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:35.112775  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:35.147243  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:35.271686  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:35.273029  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:35.277897  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:35.605975  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:35.645757  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:35.764098  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:35.764377  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:36.106052  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:36.146574  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:36.266738  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:36.269371  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:36.605156  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:36.646109  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:36.766567  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:36.767069  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:37.105482  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:37.145408  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:37.262842  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:37.264940  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:37.605630  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:37.645579  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:37.763903  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:37.764638  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:37.768030  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:38.105602  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:38.145844  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:38.279984  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:38.281288  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:38.606189  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:38.645328  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:38.766976  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:38.768517  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:39.107588  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:39.145837  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:39.267811  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:39.269043  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:39.604990  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:39.645894  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:39.764577  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:39.765987  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:39.783324  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:40.110946  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:40.149038  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:40.263916  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:40.264452  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:40.605702  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:40.646035  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:40.762583  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:40.765830  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:41.104722  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:41.146251  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:41.267893  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:41.270170  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:41.605079  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:41.646109  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:41.766428  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:41.767660  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:42.108325  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:42.152284  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:42.277162  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:42.278233  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:42.280340  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:42.605427  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:42.645085  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:42.764212  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:42.764388  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:43.105237  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:43.145656  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:43.264399  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:43.265176  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:43.605756  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:43.646160  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:43.767679  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:43.777857  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:44.106039  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:44.146446  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:44.299193  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:44.309733  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:44.326060  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:44.605473  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:44.645672  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:44.763034  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:44.764053  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:45.111264  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:45.159920  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:45.269565  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:45.270011  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:45.605305  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:45.646239  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:45.778410  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:45.779825  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:46.104643  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:46.146156  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:46.264631  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:46.267013  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:46.622647  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:46.646343  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:46.764083  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:46.765335  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:46.769473  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:47.105381  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:47.145795  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:47.263471  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:47.265096  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:47.605821  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:47.646133  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:47.763675  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:47.765088  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:48.105731  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:48.146388  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:48.277910  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:48.279115  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:48.607534  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:48.646422  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:48.771915  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:48.773860  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:48.783304  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:49.105357  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:49.146229  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:49.265098  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:49.266325  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:49.606355  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:49.645828  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:49.775820  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:49.779206  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:50.107042  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:50.146396  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:50.265357  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:50.268892  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:50.606663  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:50.649461  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:50.766106  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:50.768357  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:51.106471  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:51.145827  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:51.263868  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:51.273856  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:51.276035  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:51.605984  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:51.646501  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:51.770956  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:51.775016  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:52.105268  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:52.145877  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:52.263405  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:52.264901  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:52.606281  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:52.646369  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:52.774325  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:52.775093  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:53.106374  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:53.146473  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:53.267665  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:53.269478  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:53.276369  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:53.607786  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:53.705941  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:53.808463  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:53.808930  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:54.106742  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:54.146131  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:54.262778  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:54.263743  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:54.605780  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:54.645489  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:54.763543  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:54.764691  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:55.105073  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:55.146671  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:55.263581  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:55.264593  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:55.604808  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:55.645627  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:55.765957  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:55.767629  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:55.774463  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:56.106436  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:56.147428  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:56.274490  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:56.276298  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:56.606475  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:56.663836  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:56.768576  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:56.770804  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:57.105671  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:57.146711  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:57.264259  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:57.270150  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:57.607038  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:57.645905  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:57.766741  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:57.769544  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:57.777959  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:34:58.105648  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:58.146227  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:58.265054  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:58.265762  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:58.605480  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:58.646483  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:58.766211  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:58.768130  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:59.105789  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:59.146597  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:59.265677  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:59.269145  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:34:59.605347  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:34:59.645340  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:34:59.765278  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:34:59.767138  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:00.159210  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:00.164293  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:00.328550  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:00.329744  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:35:00.335942  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:00.606480  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:00.647703  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:00.763533  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:35:00.765948  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:01.106323  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:01.146291  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:01.264390  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:01.265200  283957 kapi.go:107] duration metric: took 1m25.507422226s to wait for kubernetes.io/minikube-addons=registry ...
	I0831 22:35:01.612483  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:01.646438  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:01.767506  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:02.106814  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:02.206008  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:02.262315  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:02.606382  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:02.645915  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:02.764109  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:02.766427  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:03.105521  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:03.145663  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:03.262337  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:03.605065  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:03.645471  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:03.763085  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:04.105575  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:04.146506  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:04.265127  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:04.622274  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:04.650220  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:04.763587  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:04.771154  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:05.107755  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:05.146930  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:05.263894  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:05.605375  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:05.645868  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:05.764781  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:06.105494  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:06.146233  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:06.262353  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:06.609706  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:06.646514  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:06.766654  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:07.105395  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:07.147002  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:07.265286  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:07.269347  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:07.605980  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:07.645479  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:07.766524  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:08.105796  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:08.146353  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:08.280220  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:08.606605  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:08.645535  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:08.764454  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:09.105835  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:09.145440  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:09.262310  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:09.605511  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:09.646558  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:09.765787  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:09.767713  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:10.107122  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:10.146046  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:10.271694  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:10.606278  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:10.645926  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:10.767543  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:11.106465  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:11.150614  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:11.263411  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:11.610421  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:11.653984  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:35:11.768938  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:12.105749  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:12.205140  283957 kapi.go:107] duration metric: took 1m31.563232697s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0831 22:35:12.208102  283957 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-926553 cluster.
	I0831 22:35:12.210660  283957 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0831 22:35:12.213274  283957 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
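The three out.go messages above are the only explanatory text in this stretch of the log: gcp-auth mounts GCP credentials into newly created pods, a pod can opt out via a gcp-auth-skip-secret label, and rerunning the addon with --refresh covers pods that already existed. A minimal sketch of what that looks like against this cluster follows; the pod name, the image, and the label value "true" are assumptions not stated in the log, while the label key, the profile name, and the --refresh flag are taken from the messages above.

	# Create a pod that opts out of GCP credential mounting.
	# Label key comes from the log above; the "true" value is an assumed convention.
	# <<- strips the leading tabs, so the YAML that reaches kubectl is space-indented.
	cat <<-'EOF' | kubectl --context addons-926553 apply -f -
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds            # hypothetical name
	  labels:
	    gcp-auth-skip-secret: "true"
	spec:
	  containers:
	  - name: app
	    image: busybox              # hypothetical image
	    command: ["sleep", "3600"]
	EOF

	# Pods created before the addon finished can pick up credentials by re-running it:
	minikube -p addons-926553 addons enable gcp-auth --refresh

After this point the log returns to the polling loop for the remaining addon pods.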
	I0831 22:35:12.264022  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:12.265955  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:12.604934  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:12.763295  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:13.105032  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:13.262133  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:13.606171  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:13.764828  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:14.106701  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:14.261801  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:14.604865  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:14.765083  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:14.771193  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:15.110555  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:15.271540  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:15.605431  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:15.766094  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:16.110167  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:16.267927  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:16.606034  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:16.764905  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:16.766036  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:17.105448  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:17.264901  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:17.604881  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:17.764247  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:18.107297  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:18.263113  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:18.607207  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:18.763761  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:18.767424  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:19.105348  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:19.265466  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:19.606177  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:19.772514  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:20.107301  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:20.265082  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:20.606295  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:20.762817  283957 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:35:20.769525  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:21.106007  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:21.262749  283957 kapi.go:107] duration metric: took 1m45.504982271s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0831 22:35:21.610332  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:22.123132  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:22.606681  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:23.106303  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:23.265347  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:23.610785  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:24.108937  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:24.604883  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:25.106603  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:25.266133  283957 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"False"
	I0831 22:35:25.605612  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:25.786474  283957 pod_ready.go:93] pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace has status "Ready":"True"
	I0831 22:35:25.786506  283957 pod_ready.go:82] duration metric: took 1m8.527790413s for pod "metrics-server-84c5f94fbc-zwvsl" in "kube-system" namespace to be "Ready" ...
	I0831 22:35:25.786520  283957 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-9xvjf" in "kube-system" namespace to be "Ready" ...
	I0831 22:35:25.795290  283957 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-9xvjf" in "kube-system" namespace has status "Ready":"True"
	I0831 22:35:25.795318  283957 pod_ready.go:82] duration metric: took 8.78951ms for pod "nvidia-device-plugin-daemonset-9xvjf" in "kube-system" namespace to be "Ready" ...
	I0831 22:35:25.795341  283957 pod_ready.go:39] duration metric: took 1m10.52768296s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 22:35:25.795356  283957 api_server.go:52] waiting for apiserver process to appear ...
	I0831 22:35:25.795434  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0831 22:35:25.795702  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0831 22:35:25.886248  283957 cri.go:89] found id: "4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7"
	I0831 22:35:25.886322  283957 cri.go:89] found id: ""
	I0831 22:35:25.886358  283957 logs.go:276] 1 containers: [4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7]
	I0831 22:35:25.886451  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:25.890246  283957 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0831 22:35:25.890401  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0831 22:35:25.961145  283957 cri.go:89] found id: "a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678"
	I0831 22:35:25.961169  283957 cri.go:89] found id: ""
	I0831 22:35:25.961177  283957 logs.go:276] 1 containers: [a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678]
	I0831 22:35:25.961232  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:25.971647  283957 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0831 22:35:25.971720  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0831 22:35:26.081420  283957 cri.go:89] found id: "c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb"
	I0831 22:35:26.081442  283957 cri.go:89] found id: ""
	I0831 22:35:26.081450  283957 logs.go:276] 1 containers: [c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb]
	I0831 22:35:26.081509  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:26.086692  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0831 22:35:26.086769  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0831 22:35:26.106149  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:26.187973  283957 cri.go:89] found id: "29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b"
	I0831 22:35:26.187996  283957 cri.go:89] found id: ""
	I0831 22:35:26.188004  283957 logs.go:276] 1 containers: [29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b]
	I0831 22:35:26.188061  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:26.192877  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0831 22:35:26.192951  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0831 22:35:26.297630  283957 cri.go:89] found id: "38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87"
	I0831 22:35:26.297653  283957 cri.go:89] found id: ""
	I0831 22:35:26.297662  283957 logs.go:276] 1 containers: [38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87]
	I0831 22:35:26.297719  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:26.305863  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0831 22:35:26.305932  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0831 22:35:26.386494  283957 cri.go:89] found id: "cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0"
	I0831 22:35:26.386518  283957 cri.go:89] found id: ""
	I0831 22:35:26.386526  283957 logs.go:276] 1 containers: [cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0]
	I0831 22:35:26.386596  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:26.391560  283957 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0831 22:35:26.391632  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0831 22:35:26.446888  283957 cri.go:89] found id: "7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171"
	I0831 22:35:26.446911  283957 cri.go:89] found id: ""
	I0831 22:35:26.446919  283957 logs.go:276] 1 containers: [7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171]
	I0831 22:35:26.446974  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:26.452924  283957 logs.go:123] Gathering logs for coredns [c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb] ...
	I0831 22:35:26.452953  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb"
	I0831 22:35:26.520818  283957 logs.go:123] Gathering logs for kube-proxy [38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87] ...
	I0831 22:35:26.520850  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87"
	I0831 22:35:26.579607  283957 logs.go:123] Gathering logs for kube-controller-manager [cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0] ...
	I0831 22:35:26.579638  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0"
	I0831 22:35:26.605871  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:26.676077  283957 logs.go:123] Gathering logs for kindnet [7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171] ...
	I0831 22:35:26.676186  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171"
	I0831 22:35:26.772215  283957 logs.go:123] Gathering logs for CRI-O ...
	I0831 22:35:26.772299  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0831 22:35:26.885704  283957 logs.go:123] Gathering logs for kubelet ...
	I0831 22:35:26.885743  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0831 22:35:26.971800  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006611    1497 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-926553' and this object
	W0831 22:35:26.972187  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006663    1497 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-926553' and this object
	W0831 22:35:26.972448  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006673    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:26.972661  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006614    1497 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-926553' and this object
	W0831 22:35:26.972903  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006691    1497 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:26.973166  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006699    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	I0831 22:35:27.026028  283957 logs.go:123] Gathering logs for describe nodes ...
	I0831 22:35:27.026122  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 22:35:27.121170  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:27.306579  283957 logs.go:123] Gathering logs for etcd [a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678] ...
	I0831 22:35:27.306611  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678"
	I0831 22:35:27.381339  283957 logs.go:123] Gathering logs for kube-scheduler [29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b] ...
	I0831 22:35:27.381381  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b"
	I0831 22:35:27.432923  283957 logs.go:123] Gathering logs for container status ...
	I0831 22:35:27.432958  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 22:35:27.505422  283957 logs.go:123] Gathering logs for dmesg ...
	I0831 22:35:27.505456  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 22:35:27.523608  283957 logs.go:123] Gathering logs for kube-apiserver [4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7] ...
	I0831 22:35:27.523691  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7"
	I0831 22:35:27.594979  283957 out.go:358] Setting ErrFile to fd 2...
	I0831 22:35:27.595049  283957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 22:35:27.595118  283957 out.go:270] X Problems detected in kubelet:
	W0831 22:35:27.595127  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006663    1497 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-926553' and this object
	W0831 22:35:27.595134  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006673    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:27.595140  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006614    1497 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-926553' and this object
	W0831 22:35:27.595148  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006691    1497 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:27.595158  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006699    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	I0831 22:35:27.595169  283957 out.go:358] Setting ErrFile to fd 2...
	I0831 22:35:27.595175  283957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:35:27.606018  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:28.107291  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:28.607326  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:29.107920  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:29.605540  283957 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:35:30.116852  283957 kapi.go:107] duration metric: took 1m54.016739242s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0831 22:35:30.119299  283957 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, storage-provisioner, metrics-server, yakd, inspektor-gadget, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0831 22:35:30.123306  283957 addons.go:510] duration metric: took 2m1.799821522s for enable addons: enabled=[nvidia-device-plugin ingress-dns cloud-spanner storage-provisioner metrics-server yakd inspektor-gadget default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0831 22:35:37.595431  283957 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:35:37.609347  283957 api_server.go:72] duration metric: took 2m9.286263895s to wait for apiserver process to appear ...
	I0831 22:35:37.609372  283957 api_server.go:88] waiting for apiserver healthz status ...
	I0831 22:35:37.609409  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0831 22:35:37.609464  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0831 22:35:37.653375  283957 cri.go:89] found id: "4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7"
	I0831 22:35:37.653399  283957 cri.go:89] found id: ""
	I0831 22:35:37.653408  283957 logs.go:276] 1 containers: [4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7]
	I0831 22:35:37.653466  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:37.657014  283957 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0831 22:35:37.657091  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0831 22:35:37.702049  283957 cri.go:89] found id: "a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678"
	I0831 22:35:37.702081  283957 cri.go:89] found id: ""
	I0831 22:35:37.702090  283957 logs.go:276] 1 containers: [a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678]
	I0831 22:35:37.702148  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:37.705948  283957 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0831 22:35:37.706022  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0831 22:35:37.743979  283957 cri.go:89] found id: "c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb"
	I0831 22:35:37.744002  283957 cri.go:89] found id: ""
	I0831 22:35:37.744010  283957 logs.go:276] 1 containers: [c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb]
	I0831 22:35:37.744067  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:37.748167  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0831 22:35:37.748235  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0831 22:35:37.787366  283957 cri.go:89] found id: "29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b"
	I0831 22:35:37.787387  283957 cri.go:89] found id: ""
	I0831 22:35:37.787394  283957 logs.go:276] 1 containers: [29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b]
	I0831 22:35:37.787456  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:37.791268  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0831 22:35:37.791418  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0831 22:35:37.839012  283957 cri.go:89] found id: "38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87"
	I0831 22:35:37.839032  283957 cri.go:89] found id: ""
	I0831 22:35:37.839040  283957 logs.go:276] 1 containers: [38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87]
	I0831 22:35:37.839095  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:37.842773  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0831 22:35:37.842857  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0831 22:35:37.882906  283957 cri.go:89] found id: "cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0"
	I0831 22:35:37.882928  283957 cri.go:89] found id: ""
	I0831 22:35:37.882936  283957 logs.go:276] 1 containers: [cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0]
	I0831 22:35:37.883016  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:37.886592  283957 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0831 22:35:37.886701  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0831 22:35:37.929003  283957 cri.go:89] found id: "7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171"
	I0831 22:35:37.929026  283957 cri.go:89] found id: ""
	I0831 22:35:37.929034  283957 logs.go:276] 1 containers: [7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171]
	I0831 22:35:37.929089  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:37.932647  283957 logs.go:123] Gathering logs for coredns [c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb] ...
	I0831 22:35:37.932675  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb"
	I0831 22:35:37.976634  283957 logs.go:123] Gathering logs for kube-scheduler [29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b] ...
	I0831 22:35:37.976663  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b"
	I0831 22:35:38.029768  283957 logs.go:123] Gathering logs for kube-proxy [38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87] ...
	I0831 22:35:38.029845  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87"
	I0831 22:35:38.089134  283957 logs.go:123] Gathering logs for kindnet [7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171] ...
	I0831 22:35:38.089209  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171"
	I0831 22:35:38.133397  283957 logs.go:123] Gathering logs for container status ...
	I0831 22:35:38.133434  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 22:35:38.191973  283957 logs.go:123] Gathering logs for kubelet ...
	I0831 22:35:38.192003  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0831 22:35:38.254593  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006611    1497 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-926553' and this object
	W0831 22:35:38.254790  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006663    1497 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-926553' and this object
	W0831 22:35:38.255021  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006673    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:38.255206  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006614    1497 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-926553' and this object
	W0831 22:35:38.255426  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006691    1497 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:38.255652  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006699    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	I0831 22:35:38.293315  283957 logs.go:123] Gathering logs for dmesg ...
	I0831 22:35:38.293348  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 22:35:38.309324  283957 logs.go:123] Gathering logs for describe nodes ...
	I0831 22:35:38.309354  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 22:35:38.449465  283957 logs.go:123] Gathering logs for CRI-O ...
	I0831 22:35:38.449541  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0831 22:35:38.557894  283957 logs.go:123] Gathering logs for kube-apiserver [4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7] ...
	I0831 22:35:38.557935  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7"
	I0831 22:35:38.613020  283957 logs.go:123] Gathering logs for etcd [a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678] ...
	I0831 22:35:38.613053  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678"
	I0831 22:35:38.667543  283957 logs.go:123] Gathering logs for kube-controller-manager [cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0] ...
	I0831 22:35:38.667580  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0"
	I0831 22:35:38.774202  283957 out.go:358] Setting ErrFile to fd 2...
	I0831 22:35:38.774279  283957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 22:35:38.774360  283957 out.go:270] X Problems detected in kubelet:
	W0831 22:35:38.774399  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006663    1497 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-926553' and this object
	W0831 22:35:38.774433  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006673    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:38.774476  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006614    1497 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-926553' and this object
	W0831 22:35:38.774510  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006691    1497 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:38.774544  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006699    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	I0831 22:35:38.774579  283957 out.go:358] Setting ErrFile to fd 2...
	I0831 22:35:38.774586  283957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:35:48.775832  283957 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 22:35:48.783566  283957 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0831 22:35:48.786206  283957 api_server.go:141] control plane version: v1.31.0
	I0831 22:35:48.786241  283957 api_server.go:131] duration metric: took 11.176861075s to wait for apiserver health ...
	I0831 22:35:48.786251  283957 system_pods.go:43] waiting for kube-system pods to appear ...
	I0831 22:35:48.786273  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0831 22:35:48.786338  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0831 22:35:48.824896  283957 cri.go:89] found id: "4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7"
	I0831 22:35:48.824918  283957 cri.go:89] found id: ""
	I0831 22:35:48.824927  283957 logs.go:276] 1 containers: [4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7]
	I0831 22:35:48.824984  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:48.828359  283957 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0831 22:35:48.828472  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0831 22:35:48.869702  283957 cri.go:89] found id: "a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678"
	I0831 22:35:48.869727  283957 cri.go:89] found id: ""
	I0831 22:35:48.869735  283957 logs.go:276] 1 containers: [a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678]
	I0831 22:35:48.869811  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:48.873344  283957 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0831 22:35:48.873422  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0831 22:35:48.912098  283957 cri.go:89] found id: "c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb"
	I0831 22:35:48.912121  283957 cri.go:89] found id: ""
	I0831 22:35:48.912129  283957 logs.go:276] 1 containers: [c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb]
	I0831 22:35:48.912185  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:48.915599  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0831 22:35:48.915669  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0831 22:35:48.958620  283957 cri.go:89] found id: "29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b"
	I0831 22:35:48.958644  283957 cri.go:89] found id: ""
	I0831 22:35:48.958653  283957 logs.go:276] 1 containers: [29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b]
	I0831 22:35:48.958744  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:48.962169  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0831 22:35:48.962244  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0831 22:35:49.006023  283957 cri.go:89] found id: "38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87"
	I0831 22:35:49.006048  283957 cri.go:89] found id: ""
	I0831 22:35:49.006056  283957 logs.go:276] 1 containers: [38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87]
	I0831 22:35:49.006118  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:49.011545  283957 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0831 22:35:49.011654  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0831 22:35:49.054445  283957 cri.go:89] found id: "cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0"
	I0831 22:35:49.054469  283957 cri.go:89] found id: ""
	I0831 22:35:49.054478  283957 logs.go:276] 1 containers: [cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0]
	I0831 22:35:49.054566  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:49.058214  283957 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0831 22:35:49.058292  283957 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0831 22:35:49.096178  283957 cri.go:89] found id: "7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171"
	I0831 22:35:49.096203  283957 cri.go:89] found id: ""
	I0831 22:35:49.096211  283957 logs.go:276] 1 containers: [7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171]
	I0831 22:35:49.096265  283957 ssh_runner.go:195] Run: which crictl
	I0831 22:35:49.099723  283957 logs.go:123] Gathering logs for kube-proxy [38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87] ...
	I0831 22:35:49.099762  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87"
	I0831 22:35:49.139017  283957 logs.go:123] Gathering logs for kube-controller-manager [cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0] ...
	I0831 22:35:49.139048  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0"
	I0831 22:35:49.212561  283957 logs.go:123] Gathering logs for kindnet [7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171] ...
	I0831 22:35:49.212599  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171"
	I0831 22:35:49.257845  283957 logs.go:123] Gathering logs for container status ...
	I0831 22:35:49.257877  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 22:35:49.305619  283957 logs.go:123] Gathering logs for describe nodes ...
	I0831 22:35:49.305649  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 22:35:49.445076  283957 logs.go:123] Gathering logs for kube-apiserver [4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7] ...
	I0831 22:35:49.445108  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7"
	I0831 22:35:49.511728  283957 logs.go:123] Gathering logs for kube-scheduler [29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b] ...
	I0831 22:35:49.511762  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b"
	I0831 22:35:49.559678  283957 logs.go:123] Gathering logs for coredns [c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb] ...
	I0831 22:35:49.559715  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb"
	I0831 22:35:49.600032  283957 logs.go:123] Gathering logs for CRI-O ...
	I0831 22:35:49.600066  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0831 22:35:49.699340  283957 logs.go:123] Gathering logs for kubelet ...
	I0831 22:35:49.699382  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0831 22:35:49.762989  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006611    1497 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-926553' and this object
	W0831 22:35:49.763218  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006663    1497 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-926553' and this object
	W0831 22:35:49.763449  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006673    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:49.763640  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006614    1497 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-926553' and this object
	W0831 22:35:49.763860  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006691    1497 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:49.764086  283957 logs.go:138] Found kubelet problem: Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006699    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	I0831 22:35:49.804313  283957 logs.go:123] Gathering logs for dmesg ...
	I0831 22:35:49.804351  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 22:35:49.820979  283957 logs.go:123] Gathering logs for etcd [a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678] ...
	I0831 22:35:49.821065  283957 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678"
	I0831 22:35:49.873854  283957 out.go:358] Setting ErrFile to fd 2...
	I0831 22:35:49.873890  283957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0831 22:35:49.873974  283957 out.go:270] X Problems detected in kubelet:
	W0831 22:35:49.873986  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006663    1497 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-926553' and this object
	W0831 22:35:49.874019  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006673    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:49.874034  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: W0831 22:34:15.006614    1497 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-926553" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-926553' and this object
	W0831 22:35:49.874045  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006691    1497 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	W0831 22:35:49.874060  283957 out.go:270]   Aug 31 22:34:15 addons-926553 kubelet[1497]: E0831 22:34:15.006699    1497 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-926553\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-926553' and this object" logger="UnhandledError"
	I0831 22:35:49.874067  283957 out.go:358] Setting ErrFile to fd 2...
	I0831 22:35:49.874074  283957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:35:59.888880  283957 system_pods.go:59] 18 kube-system pods found
	I0831 22:35:59.888960  283957 system_pods.go:61] "coredns-6f6b679f8f-sljbt" [06a33215-e61b-42b2-8530-9e2d768b6a23] Running
	I0831 22:35:59.888985  283957 system_pods.go:61] "csi-hostpath-attacher-0" [b526f874-5e15-4810-bcf9-07f50444c734] Running
	I0831 22:35:59.889010  283957 system_pods.go:61] "csi-hostpath-resizer-0" [492b4def-63d0-41e6-8f33-d77ee6d90893] Running
	I0831 22:35:59.889033  283957 system_pods.go:61] "csi-hostpathplugin-25wkk" [ed567cf4-35bb-4262-b77d-eddfcd36f96f] Running
	I0831 22:35:59.889053  283957 system_pods.go:61] "etcd-addons-926553" [e15b7cec-a13a-4582-ab11-374125bab61d] Running
	I0831 22:35:59.889074  283957 system_pods.go:61] "kindnet-wdlp4" [242e7fe0-de25-4fe8-9782-2cadf1e54e96] Running
	I0831 22:35:59.889093  283957 system_pods.go:61] "kube-apiserver-addons-926553" [0dd9d30a-f426-4944-9893-5f1537844c18] Running
	I0831 22:35:59.889115  283957 system_pods.go:61] "kube-controller-manager-addons-926553" [1ded4cb8-0f32-4a80-86b8-0cd41aef43eb] Running
	I0831 22:35:59.889134  283957 system_pods.go:61] "kube-ingress-dns-minikube" [0e07561b-af16-4df3-8e88-438e733a8930] Running
	I0831 22:35:59.889154  283957 system_pods.go:61] "kube-proxy-2x2mt" [8feaacf8-dae0-4095-966f-966ceed56f36] Running
	I0831 22:35:59.889175  283957 system_pods.go:61] "kube-scheduler-addons-926553" [34db2652-e629-4869-a324-d4aca6527e88] Running
	I0831 22:35:59.889195  283957 system_pods.go:61] "metrics-server-84c5f94fbc-zwvsl" [8f41beaa-bd4a-4b1b-957e-d5fcd9f7aba9] Running
	I0831 22:35:59.889218  283957 system_pods.go:61] "nvidia-device-plugin-daemonset-9xvjf" [77f942fc-bc62-43bb-8ecc-dbe7e16cab48] Running
	I0831 22:35:59.889238  283957 system_pods.go:61] "registry-6fb4cdfc84-bf4pl" [000dc781-4a18-4524-b73a-681e34eaa529] Running
	I0831 22:35:59.889260  283957 system_pods.go:61] "registry-proxy-6dfvf" [f354b100-f3b2-4369-b6de-637de12a35fb] Running
	I0831 22:35:59.889280  283957 system_pods.go:61] "snapshot-controller-56fcc65765-55n8n" [49bef057-02c3-4bcf-8da2-c5fa9980394f] Running
	I0831 22:35:59.889300  283957 system_pods.go:61] "snapshot-controller-56fcc65765-j4sjq" [61dde631-692d-4175-9747-daa00ca99dc7] Running
	I0831 22:35:59.889321  283957 system_pods.go:61] "storage-provisioner" [396f5f2a-755e-492f-a0ac-fa7cb6f31a10] Running
	I0831 22:35:59.889343  283957 system_pods.go:74] duration metric: took 11.103084876s to wait for pod list to return data ...
	I0831 22:35:59.889364  283957 default_sa.go:34] waiting for default service account to be created ...
	I0831 22:35:59.892759  283957 default_sa.go:45] found service account: "default"
	I0831 22:35:59.892790  283957 default_sa.go:55] duration metric: took 3.404577ms for default service account to be created ...
	I0831 22:35:59.892801  283957 system_pods.go:116] waiting for k8s-apps to be running ...
	I0831 22:35:59.903086  283957 system_pods.go:86] 18 kube-system pods found
	I0831 22:35:59.903124  283957 system_pods.go:89] "coredns-6f6b679f8f-sljbt" [06a33215-e61b-42b2-8530-9e2d768b6a23] Running
	I0831 22:35:59.903134  283957 system_pods.go:89] "csi-hostpath-attacher-0" [b526f874-5e15-4810-bcf9-07f50444c734] Running
	I0831 22:35:59.903139  283957 system_pods.go:89] "csi-hostpath-resizer-0" [492b4def-63d0-41e6-8f33-d77ee6d90893] Running
	I0831 22:35:59.903143  283957 system_pods.go:89] "csi-hostpathplugin-25wkk" [ed567cf4-35bb-4262-b77d-eddfcd36f96f] Running
	I0831 22:35:59.903148  283957 system_pods.go:89] "etcd-addons-926553" [e15b7cec-a13a-4582-ab11-374125bab61d] Running
	I0831 22:35:59.903152  283957 system_pods.go:89] "kindnet-wdlp4" [242e7fe0-de25-4fe8-9782-2cadf1e54e96] Running
	I0831 22:35:59.903157  283957 system_pods.go:89] "kube-apiserver-addons-926553" [0dd9d30a-f426-4944-9893-5f1537844c18] Running
	I0831 22:35:59.903162  283957 system_pods.go:89] "kube-controller-manager-addons-926553" [1ded4cb8-0f32-4a80-86b8-0cd41aef43eb] Running
	I0831 22:35:59.903168  283957 system_pods.go:89] "kube-ingress-dns-minikube" [0e07561b-af16-4df3-8e88-438e733a8930] Running
	I0831 22:35:59.903173  283957 system_pods.go:89] "kube-proxy-2x2mt" [8feaacf8-dae0-4095-966f-966ceed56f36] Running
	I0831 22:35:59.903178  283957 system_pods.go:89] "kube-scheduler-addons-926553" [34db2652-e629-4869-a324-d4aca6527e88] Running
	I0831 22:35:59.903182  283957 system_pods.go:89] "metrics-server-84c5f94fbc-zwvsl" [8f41beaa-bd4a-4b1b-957e-d5fcd9f7aba9] Running
	I0831 22:35:59.903191  283957 system_pods.go:89] "nvidia-device-plugin-daemonset-9xvjf" [77f942fc-bc62-43bb-8ecc-dbe7e16cab48] Running
	I0831 22:35:59.903195  283957 system_pods.go:89] "registry-6fb4cdfc84-bf4pl" [000dc781-4a18-4524-b73a-681e34eaa529] Running
	I0831 22:35:59.903199  283957 system_pods.go:89] "registry-proxy-6dfvf" [f354b100-f3b2-4369-b6de-637de12a35fb] Running
	I0831 22:35:59.903208  283957 system_pods.go:89] "snapshot-controller-56fcc65765-55n8n" [49bef057-02c3-4bcf-8da2-c5fa9980394f] Running
	I0831 22:35:59.903212  283957 system_pods.go:89] "snapshot-controller-56fcc65765-j4sjq" [61dde631-692d-4175-9747-daa00ca99dc7] Running
	I0831 22:35:59.903225  283957 system_pods.go:89] "storage-provisioner" [396f5f2a-755e-492f-a0ac-fa7cb6f31a10] Running
	I0831 22:35:59.903232  283957 system_pods.go:126] duration metric: took 10.425939ms to wait for k8s-apps to be running ...
	I0831 22:35:59.903240  283957 system_svc.go:44] waiting for kubelet service to be running ....
	I0831 22:35:59.903305  283957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:35:59.914900  283957 system_svc.go:56] duration metric: took 11.64979ms WaitForService to wait for kubelet
	I0831 22:35:59.914930  283957 kubeadm.go:582] duration metric: took 2m31.591852103s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 22:35:59.914951  283957 node_conditions.go:102] verifying NodePressure condition ...
	I0831 22:35:59.918337  283957 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0831 22:35:59.918373  283957 node_conditions.go:123] node cpu capacity is 2
	I0831 22:35:59.918383  283957 node_conditions.go:105] duration metric: took 3.427642ms to run NodePressure ...
	I0831 22:35:59.918397  283957 start.go:241] waiting for startup goroutines ...
	I0831 22:35:59.918404  283957 start.go:246] waiting for cluster config update ...
	I0831 22:35:59.918419  283957 start.go:255] writing updated cluster config ...
	I0831 22:35:59.918717  283957 ssh_runner.go:195] Run: rm -f paused
	I0831 22:36:00.538015  283957 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0831 22:36:00.544227  283957 out.go:177] * Done! kubectl is now configured to use "addons-926553" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 31 22:48:26 addons-926553 crio[969]: time="2024-08-31 22:48:26.050971717Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=98a28696-9c90-48d8-b407-04e44a6dac2a name=/runtime.v1.ImageService/ImageStatus
	Aug 31 22:48:37 addons-926553 crio[969]: time="2024-08-31 22:48:37.050937051Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=74ccd41b-1653-46a1-bf07-243dcdecfc60 name=/runtime.v1.ImageService/ImageStatus
	Aug 31 22:48:37 addons-926553 crio[969]: time="2024-08-31 22:48:37.051181422Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=74ccd41b-1653-46a1-bf07-243dcdecfc60 name=/runtime.v1.ImageService/ImageStatus
	Aug 31 22:48:51 addons-926553 crio[969]: time="2024-08-31 22:48:51.051201783Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e8bc36fe-3efe-424e-bac1-89c1bafb16b0 name=/runtime.v1.ImageService/ImageStatus
	Aug 31 22:48:51 addons-926553 crio[969]: time="2024-08-31 22:48:51.051442569Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e8bc36fe-3efe-424e-bac1-89c1bafb16b0 name=/runtime.v1.ImageService/ImageStatus
	Aug 31 22:49:02 addons-926553 crio[969]: time="2024-08-31 22:49:02.050792052Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4deab6af-be6c-4272-babc-01c229653516 name=/runtime.v1.ImageService/ImageStatus
	Aug 31 22:49:02 addons-926553 crio[969]: time="2024-08-31 22:49:02.051065174Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=4deab6af-be6c-4272-babc-01c229653516 name=/runtime.v1.ImageService/ImageStatus
	Aug 31 22:49:14 addons-926553 crio[969]: time="2024-08-31 22:49:14.051202295Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dbce6fa3-0d55-4f12-b429-b81896dea4ff name=/runtime.v1.ImageService/ImageStatus
	Aug 31 22:49:14 addons-926553 crio[969]: time="2024-08-31 22:49:14.051449333Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=dbce6fa3-0d55-4f12-b429-b81896dea4ff name=/runtime.v1.ImageService/ImageStatus
	Aug 31 22:49:27 addons-926553 crio[969]: time="2024-08-31 22:49:27.050783237Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=586ab4eb-4041-4d9e-8db0-a8762ef2cb7d name=/runtime.v1.ImageService/ImageStatus
	Aug 31 22:49:27 addons-926553 crio[969]: time="2024-08-31 22:49:27.051020117Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=586ab4eb-4041-4d9e-8db0-a8762ef2cb7d name=/runtime.v1.ImageService/ImageStatus
	Aug 31 22:49:39 addons-926553 crio[969]: time="2024-08-31 22:49:39.051293654Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7f7ad0d3-41c9-461d-9bd3-06639e20765b name=/runtime.v1.ImageService/ImageStatus
	Aug 31 22:49:39 addons-926553 crio[969]: time="2024-08-31 22:49:39.051536081Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=7f7ad0d3-41c9-461d-9bd3-06639e20765b name=/runtime.v1.ImageService/ImageStatus
	Aug 31 22:49:53 addons-926553 crio[969]: time="2024-08-31 22:49:53.056616245Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c2b1d496-0322-4906-9189-07b9e084efe3 name=/runtime.v1.ImageService/ImageStatus
	Aug 31 22:49:53 addons-926553 crio[969]: time="2024-08-31 22:49:53.056881351Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=c2b1d496-0322-4906-9189-07b9e084efe3 name=/runtime.v1.ImageService/ImageStatus
	Aug 31 22:50:08 addons-926553 crio[969]: time="2024-08-31 22:50:08.050777904Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c16cb0be-64ad-45c0-b413-16d8243d3f10 name=/runtime.v1.ImageService/ImageStatus
	Aug 31 22:50:08 addons-926553 crio[969]: time="2024-08-31 22:50:08.051031999Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=c16cb0be-64ad-45c0-b413-16d8243d3f10 name=/runtime.v1.ImageService/ImageStatus
	Aug 31 22:50:20 addons-926553 crio[969]: time="2024-08-31 22:50:20.051267514Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=43aa547c-7f96-47a2-bdaf-de03c0cb8db4 name=/runtime.v1.ImageService/ImageStatus
	Aug 31 22:50:20 addons-926553 crio[969]: time="2024-08-31 22:50:20.051540308Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=43aa547c-7f96-47a2-bdaf-de03c0cb8db4 name=/runtime.v1.ImageService/ImageStatus
	Aug 31 22:50:23 addons-926553 crio[969]: time="2024-08-31 22:50:23.284574587Z" level=info msg="Stopping container: 1512f4dc6befdd0bb0fbbf7a635956f65430c56cf871e38d6c0da5a0f3c101a5 (timeout: 30s)" id=05545086-2bb0-45df-b16d-fd99d1c773e4 name=/runtime.v1.RuntimeService/StopContainer
	Aug 31 22:50:24 addons-926553 crio[969]: time="2024-08-31 22:50:24.459613636Z" level=info msg="Stopped container 1512f4dc6befdd0bb0fbbf7a635956f65430c56cf871e38d6c0da5a0f3c101a5: kube-system/metrics-server-84c5f94fbc-zwvsl/metrics-server" id=05545086-2bb0-45df-b16d-fd99d1c773e4 name=/runtime.v1.RuntimeService/StopContainer
	Aug 31 22:50:24 addons-926553 crio[969]: time="2024-08-31 22:50:24.460521289Z" level=info msg="Stopping pod sandbox: 9ffbb41ccd3eb6964491e2b3016dc48b9459ef65822773e76bcd1ddca1400ddd" id=2929326b-0e25-44dd-a8f4-0ca402c2152b name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 31 22:50:24 addons-926553 crio[969]: time="2024-08-31 22:50:24.460731888Z" level=info msg="Got pod network &{Name:metrics-server-84c5f94fbc-zwvsl Namespace:kube-system ID:9ffbb41ccd3eb6964491e2b3016dc48b9459ef65822773e76bcd1ddca1400ddd UID:8f41beaa-bd4a-4b1b-957e-d5fcd9f7aba9 NetNS:/var/run/netns/fc73b264-21ce-442a-8825-1b39456bb6cb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 31 22:50:24 addons-926553 crio[969]: time="2024-08-31 22:50:24.460864056Z" level=info msg="Deleting pod kube-system_metrics-server-84c5f94fbc-zwvsl from CNI network \"kindnet\" (type=ptp)"
	Aug 31 22:50:24 addons-926553 crio[969]: time="2024-08-31 22:50:24.515498374Z" level=info msg="Stopped pod sandbox: 9ffbb41ccd3eb6964491e2b3016dc48b9459ef65822773e76bcd1ddca1400ddd" id=2929326b-0e25-44dd-a8f4-0ca402c2152b name=/runtime.v1.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2f63a683509df       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   3f204122dcaee       hello-world-app-55bf9c44b4-9xzr8
	760b8772821dc       docker.io/library/nginx@sha256:ba188f579f7a2638229e326e78c957a185630e303757813ef1ad7aac1b8248b6                         4 minutes ago       Running             nginx                     0                   616434f938bd3       nginx
	5102df2042c27       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69            15 minutes ago      Running             gcp-auth                  0                   c6ce5424649e0       gcp-auth-89d5ffd79-ntcjg
	08c755cbd5fe4       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98        15 minutes ago      Running             local-path-provisioner    0                   57dab6b5f6051       local-path-provisioner-86d989889c-5d9bc
	1512f4dc6befd       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f   16 minutes ago      Exited              metrics-server            0                   9ffbb41ccd3eb       metrics-server-84c5f94fbc-zwvsl
	d4a4a18a5a7f6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        16 minutes ago      Running             storage-provisioner       0                   37a8c2f557cde       storage-provisioner
	c0854dd1abcf9       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                        16 minutes ago      Running             coredns                   0                   c565a0f2f52b8       coredns-6f6b679f8f-sljbt
	7cc064acda755       docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64                      16 minutes ago      Running             kindnet-cni               0                   ba7fb4cc6f892       kindnet-wdlp4
	38638055bfba9       71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89                                                        16 minutes ago      Running             kube-proxy                0                   2faf839d32f54       kube-proxy-2x2mt
	cc59354075cb7       fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd                                                        17 minutes ago      Running             kube-controller-manager   0                   9d98609f879af       kube-controller-manager-addons-926553
	a2ceaab8a5e1b       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                        17 minutes ago      Running             etcd                      0                   003527351e2b0       etcd-addons-926553
	29388d95df021       fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb                                                        17 minutes ago      Running             kube-scheduler            0                   58f6b662812e6       kube-scheduler-addons-926553
	4f3de6a88ca04       cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388                                                        17 minutes ago      Running             kube-apiserver            0                   fec228035ae32       kube-apiserver-addons-926553
	
	
	==> coredns [c0854dd1abcf9feb03bffa5eabda6ac98742ae97965028f2ed93491deabb0cbb] <==
	[INFO] 10.244.0.14:47403 - 18828 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000107133s
	[INFO] 10.244.0.14:60100 - 56608 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.008414849s
	[INFO] 10.244.0.14:60100 - 41517 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.009713783s
	[INFO] 10.244.0.14:38062 - 19984 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000167759s
	[INFO] 10.244.0.14:38062 - 61468 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000152022s
	[INFO] 10.244.0.14:56768 - 49550 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000107535s
	[INFO] 10.244.0.14:56768 - 25522 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000038949s
	[INFO] 10.244.0.14:36032 - 41173 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000087826s
	[INFO] 10.244.0.14:36032 - 21969 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000059166s
	[INFO] 10.244.0.14:57338 - 29619 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000121532s
	[INFO] 10.244.0.14:57338 - 61873 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000038046s
	[INFO] 10.244.0.14:56027 - 58740 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002244404s
	[INFO] 10.244.0.14:56027 - 1643 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002177787s
	[INFO] 10.244.0.14:36047 - 49336 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000069111s
	[INFO] 10.244.0.14:36047 - 12732 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000192186s
	[INFO] 10.244.0.19:60080 - 19976 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000207808s
	[INFO] 10.244.0.19:44795 - 23051 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000118792s
	[INFO] 10.244.0.19:45334 - 37804 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000198151s
	[INFO] 10.244.0.19:49736 - 43423 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000110488s
	[INFO] 10.244.0.19:60561 - 60650 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127867s
	[INFO] 10.244.0.19:55452 - 41864 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000097204s
	[INFO] 10.244.0.19:54221 - 39065 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002445188s
	[INFO] 10.244.0.19:53320 - 41026 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00209399s
	[INFO] 10.244.0.19:57162 - 45093 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001822174s
	[INFO] 10.244.0.19:34360 - 14218 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002709595s
	
	
	==> describe nodes <==
	Name:               addons-926553
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-926553
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=addons-926553
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_31T22_33_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-926553
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:33:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-926553
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 22:50:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 22:48:32 +0000   Sat, 31 Aug 2024 22:33:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 22:48:32 +0000   Sat, 31 Aug 2024 22:33:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 22:48:32 +0000   Sat, 31 Aug 2024 22:33:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 22:48:32 +0000   Sat, 31 Aug 2024 22:34:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-926553
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 a4c4652ff78a412da204ff6653859615
	  System UUID:                a9959b90-2ddc-4599-b12a-adb3653f0cc6
	  Boot ID:                    4693db4c-e615-4c4b-bc39-ae1431ae1ebc
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     hello-world-app-55bf9c44b4-9xzr8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  gcp-auth                    gcp-auth-89d5ffd79-ntcjg                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 coredns-6f6b679f8f-sljbt                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     16m
	  kube-system                 etcd-addons-926553                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m
	  kube-system                 kindnet-wdlp4                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-addons-926553               250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-addons-926553      200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-2x2mt                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-addons-926553               100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  local-path-storage          local-path-provisioner-86d989889c-5d9bc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 16m                kube-proxy       
	  Normal   NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node addons-926553 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node addons-926553 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node addons-926553 status is now: NodeHasSufficientPID
	  Normal   Starting                 17m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 17m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  17m                kubelet          Node addons-926553 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m                kubelet          Node addons-926553 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m                kubelet          Node addons-926553 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m                node-controller  Node addons-926553 event: Registered Node addons-926553 in Controller
	  Normal   NodeReady                16m                kubelet          Node addons-926553 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug31 20:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014722] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.471263] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.854339] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.621095] kauditd_printk_skb: 36 callbacks suppressed
	[Aug31 21:33] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Aug31 21:36] hrtimer: interrupt took 85633258 ns
	
	
	==> etcd [a2ceaab8a5e1bc7606c7881d619c6780cb545e21e8caff58daa486171143d678] <==
	{"level":"info","ts":"2024-08-31T22:33:33.530997Z","caller":"traceutil/trace.go:171","msg":"trace[234719321] transaction","detail":"{read_only:false; response_revision:413; number_of_response:1; }","duration":"135.133009ms","start":"2024-08-31T22:33:33.395850Z","end":"2024-08-31T22:33:33.530983Z","steps":["trace[234719321] 'process raft request'  (duration: 127.947447ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T22:33:33.531851Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.575244ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-31T22:33:33.566193Z","caller":"traceutil/trace.go:171","msg":"trace[595826488] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:417; }","duration":"306.912441ms","start":"2024-08-31T22:33:33.259250Z","end":"2024-08-31T22:33:33.566162Z","steps":["trace[595826488] 'agreement among raft nodes before linearized reading'  (duration: 272.568401ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T22:33:33.573983Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-31T22:33:33.259231Z","time spent":"314.696303ms","remote":"127.0.0.1:50728","response type":"/etcdserverpb.KV/Range","request count":0,"request size":25,"response count":0,"response size":29,"request content":"key:\"/registry/limitranges\" limit:1 "}
	{"level":"warn","ts":"2024-08-31T22:33:33.531910Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.516901ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:1 size:3145"}
	{"level":"info","ts":"2024-08-31T22:33:33.577122Z","caller":"traceutil/trace.go:171","msg":"trace[1467882144] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:1; response_revision:417; }","duration":"164.71578ms","start":"2024-08-31T22:33:33.412390Z","end":"2024-08-31T22:33:33.577105Z","steps":["trace[1467882144] 'agreement among raft nodes before linearized reading'  (duration: 119.483424ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T22:33:33.531929Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.582255ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/metrics-server\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-08-31T22:33:33.531951Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.333652ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-08-31T22:33:33.531974Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"271.597156ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-08-31T22:33:33.531992Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.932034ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-08-31T22:33:33.532029Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.89155ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry\" ","response":"range_response_count:1 size:3351"}
	{"level":"info","ts":"2024-08-31T22:33:33.577617Z","caller":"traceutil/trace.go:171","msg":"trace[133143656] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:0; response_revision:417; }","duration":"165.263282ms","start":"2024-08-31T22:33:33.412344Z","end":"2024-08-31T22:33:33.577607Z","steps":["trace[133143656] 'agreement among raft nodes before linearized reading'  (duration: 119.576076ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:33:33.577692Z","caller":"traceutil/trace.go:171","msg":"trace[701626801] range","detail":"{range_begin:/registry/clusterrolebindings/storage-provisioner; range_end:; response_count:0; response_revision:417; }","duration":"182.073297ms","start":"2024-08-31T22:33:33.395612Z","end":"2024-08-31T22:33:33.577685Z","steps":["trace[701626801] 'agreement among raft nodes before linearized reading'  (duration: 136.325628ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-31T22:33:33.577710Z","caller":"traceutil/trace.go:171","msg":"trace[617058299] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:417; }","duration":"317.35042ms","start":"2024-08-31T22:33:33.260355Z","end":"2024-08-31T22:33:33.577705Z","steps":["trace[617058299] 'agreement among raft nodes before linearized reading'  (duration: 271.603752ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T22:33:33.609326Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-31T22:33:33.260279Z","time spent":"349.011862ms","remote":"127.0.0.1:50662","response type":"/etcdserverpb.KV/Range","request count":0,"request size":28,"response count":0,"response size":29,"request content":"key:\"/registry/resourcequotas\" limit:1 "}
	{"level":"info","ts":"2024-08-31T22:33:33.577966Z","caller":"traceutil/trace.go:171","msg":"trace[1867680583] range","detail":"{range_begin:/registry/clusterrolebindings/minikube-ingress-dns; range_end:; response_count:0; response_revision:417; }","duration":"318.901298ms","start":"2024-08-31T22:33:33.259056Z","end":"2024-08-31T22:33:33.577957Z","steps":["trace[1867680583] 'agreement among raft nodes before linearized reading'  (duration: 272.92616ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T22:33:33.610229Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-31T22:33:33.259019Z","time spent":"351.194926ms","remote":"127.0.0.1:50942","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":0,"response size":29,"request content":"key:\"/registry/clusterrolebindings/minikube-ingress-dns\" "}
	{"level":"info","ts":"2024-08-31T22:33:33.577987Z","caller":"traceutil/trace.go:171","msg":"trace[1222259269] range","detail":"{range_begin:/registry/deployments/kube-system/registry; range_end:; response_count:1; response_revision:417; }","duration":"318.848532ms","start":"2024-08-31T22:33:33.259134Z","end":"2024-08-31T22:33:33.577983Z","steps":["trace[1222259269] 'agreement among raft nodes before linearized reading'  (duration: 272.866747ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T22:33:33.614870Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-31T22:33:33.259122Z","time spent":"355.723597ms","remote":"127.0.0.1:51030","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":1,"response size":3375,"request content":"key:\"/registry/deployments/kube-system/registry\" "}
	{"level":"info","ts":"2024-08-31T22:43:18.076737Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1540}
	{"level":"info","ts":"2024-08-31T22:43:18.119550Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1540,"took":"42.327277ms","hash":1500695898,"current-db-size-bytes":6250496,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":3358720,"current-db-size-in-use":"3.4 MB"}
	{"level":"info","ts":"2024-08-31T22:43:18.119615Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1500695898,"revision":1540,"compact-revision":-1}
	{"level":"info","ts":"2024-08-31T22:48:18.085160Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1957}
	{"level":"info","ts":"2024-08-31T22:48:18.103755Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1957,"took":"18.024409ms","hash":4157701435,"current-db-size-bytes":6250496,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":4734976,"current-db-size-in-use":"4.7 MB"}
	{"level":"info","ts":"2024-08-31T22:48:18.103814Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4157701435,"revision":1957,"compact-revision":1540}
	
	
	==> gcp-auth [5102df2042c274c3bdda768e34fef45be4cf3338060a3b3ca18b308ef802a5b7] <==
	2024/08/31 22:36:01 Ready to write response ...
	2024/08/31 22:36:01 Ready to marshal response ...
	2024/08/31 22:36:01 Ready to write response ...
	2024/08/31 22:44:06 Ready to marshal response ...
	2024/08/31 22:44:06 Ready to write response ...
	2024/08/31 22:44:14 Ready to marshal response ...
	2024/08/31 22:44:14 Ready to write response ...
	2024/08/31 22:44:27 Ready to marshal response ...
	2024/08/31 22:44:27 Ready to write response ...
	2024/08/31 22:45:02 Ready to marshal response ...
	2024/08/31 22:45:02 Ready to write response ...
	2024/08/31 22:45:02 Ready to marshal response ...
	2024/08/31 22:45:02 Ready to write response ...
	2024/08/31 22:45:10 Ready to marshal response ...
	2024/08/31 22:45:10 Ready to write response ...
	2024/08/31 22:45:18 Ready to marshal response ...
	2024/08/31 22:45:18 Ready to write response ...
	2024/08/31 22:45:18 Ready to marshal response ...
	2024/08/31 22:45:18 Ready to write response ...
	2024/08/31 22:45:18 Ready to marshal response ...
	2024/08/31 22:45:18 Ready to write response ...
	2024/08/31 22:45:48 Ready to marshal response ...
	2024/08/31 22:45:48 Ready to write response ...
	2024/08/31 22:48:07 Ready to marshal response ...
	2024/08/31 22:48:07 Ready to write response ...
	
	
	==> kernel <==
	 22:50:24 up  2:32,  0 users,  load average: 1.16, 0.55, 1.19
	Linux addons-926553 5.15.0-1068-aws #74~20.04.1-Ubuntu SMP Tue Aug 6 19:45:17 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [7cc064acda7550da077bdd80c13717d71c46647be789f3caf46541a24ab8e171] <==
	I0831 22:48:24.652213       1 main.go:299] handling current node
	I0831 22:48:34.650448       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:48:34.650570       1 main.go:299] handling current node
	I0831 22:48:44.649555       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:48:44.649602       1 main.go:299] handling current node
	I0831 22:48:54.651134       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:48:54.651171       1 main.go:299] handling current node
	I0831 22:49:04.652452       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:49:04.652497       1 main.go:299] handling current node
	I0831 22:49:14.656529       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:49:14.656563       1 main.go:299] handling current node
	I0831 22:49:24.649569       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:49:24.649684       1 main.go:299] handling current node
	I0831 22:49:34.650341       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:49:34.650376       1 main.go:299] handling current node
	I0831 22:49:44.649569       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:49:44.649601       1 main.go:299] handling current node
	I0831 22:49:54.651518       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:49:54.651568       1 main.go:299] handling current node
	I0831 22:50:04.649599       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:50:04.649743       1 main.go:299] handling current node
	I0831 22:50:14.652069       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:50:14.652102       1 main.go:299] handling current node
	I0831 22:50:24.658301       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 22:50:24.658352       1 main.go:299] handling current node
	
	
	==> kube-apiserver [4f3de6a88ca0467d55e6ee2cbf2537869d2b51007bd7ecbcf85a9a40bad881f7] <==
	E0831 22:35:25.357143       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0831 22:35:25.405475       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0831 22:44:19.100699       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0831 22:44:43.651018       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:44:43.651160       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0831 22:44:43.684644       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:44:43.684781       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0831 22:44:43.702517       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:44:43.702581       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0831 22:44:43.707833       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:44:43.707886       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0831 22:44:43.743226       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:44:43.743276       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0831 22:44:44.708480       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0831 22:44:44.744290       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0831 22:44:44.836280       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0831 22:45:18.780004       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.178.150"}
	I0831 22:45:42.410885       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0831 22:45:43.451889       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0831 22:45:47.980738       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0831 22:45:48.341429       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.56.133"}
	I0831 22:48:07.301950       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.239.134"}
	
	
	==> kube-controller-manager [cc59354075cb75155b972be5a293583121144b9130536ca5c9eedd9701ab2ab0] <==
	E0831 22:48:20.907777       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0831 22:48:32.447528       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-926553"
	W0831 22:48:43.728651       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:48:43.728694       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:48:45.174080       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:48:45.174233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:48:52.979413       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:48:52.979468       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:49:12.422690       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:49:12.422734       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:49:19.523700       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:49:19.523743       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:49:28.868624       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:49:28.868669       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:49:39.248202       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:49:39.248245       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:49:58.578716       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:49:58.579353       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:50:05.285650       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:50:05.285694       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:50:05.502059       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:50:05.502112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:50:16.150819       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:50:16.150866       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0831 22:50:23.245663       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="5.382µs"
	
	
	==> kube-proxy [38638055bfba9883590e864b9fe26e72ea3251b450a2bb2a41567b8b3fa6ae87] <==
	I0831 22:33:33.909772       1 server_linux.go:66] "Using iptables proxy"
	I0831 22:33:34.876166       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0831 22:33:34.876653       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0831 22:33:35.043499       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0831 22:33:35.050030       1 server_linux.go:169] "Using iptables Proxier"
	I0831 22:33:35.104068       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0831 22:33:35.104588       1 server.go:483] "Version info" version="v1.31.0"
	I0831 22:33:35.104890       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 22:33:35.106274       1 config.go:197] "Starting service config controller"
	I0831 22:33:35.106395       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0831 22:33:35.106464       1 config.go:104] "Starting endpoint slice config controller"
	I0831 22:33:35.106494       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0831 22:33:35.107280       1 config.go:326] "Starting node config controller"
	I0831 22:33:35.107354       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0831 22:33:35.222348       1 shared_informer.go:320] Caches are synced for node config
	I0831 22:33:35.222470       1 shared_informer.go:320] Caches are synced for service config
	I0831 22:33:35.222534       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [29388d95df0212550a0cec38543d149d769d539a1b1404295a786148fde3572b] <==
	W0831 22:33:20.578962       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0831 22:33:20.578977       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:20.579019       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0831 22:33:20.579037       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:20.579079       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0831 22:33:20.579097       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:20.579189       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0831 22:33:20.579211       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:20.579279       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0831 22:33:20.579296       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:20.579337       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0831 22:33:20.579353       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:20.584824       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0831 22:33:20.584869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:21.398071       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0831 22:33:21.398208       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:21.413716       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0831 22:33:21.413827       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:21.497136       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0831 22:33:21.497258       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:21.589583       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0831 22:33:21.589719       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:33:21.860482       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0831 22:33:21.860528       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0831 22:33:24.865187       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 31 22:49:43 addons-926553 kubelet[1497]: E0831 22:49:43.361223    1497 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144583360871449,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582441,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:49:43 addons-926553 kubelet[1497]: E0831 22:49:43.361262    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144583360871449,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582441,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:49:53 addons-926553 kubelet[1497]: E0831 22:49:53.058853    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="5722af42-82b3-4bf5-a07f-92ee5dd87a84"
	Aug 31 22:49:53 addons-926553 kubelet[1497]: E0831 22:49:53.364433    1497 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144593364149404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582441,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:49:53 addons-926553 kubelet[1497]: E0831 22:49:53.364469    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144593364149404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582441,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:50:03 addons-926553 kubelet[1497]: E0831 22:50:03.367370    1497 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144603367118257,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582441,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:50:03 addons-926553 kubelet[1497]: E0831 22:50:03.367414    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144603367118257,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582441,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:50:08 addons-926553 kubelet[1497]: E0831 22:50:08.051350    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="5722af42-82b3-4bf5-a07f-92ee5dd87a84"
	Aug 31 22:50:13 addons-926553 kubelet[1497]: E0831 22:50:13.370330    1497 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144613370025204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582441,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:50:13 addons-926553 kubelet[1497]: E0831 22:50:13.370376    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144613370025204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582441,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:50:20 addons-926553 kubelet[1497]: E0831 22:50:20.052888    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="5722af42-82b3-4bf5-a07f-92ee5dd87a84"
	Aug 31 22:50:23 addons-926553 kubelet[1497]: I0831 22:50:23.283566    1497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-9xzr8" podStartSLOduration=135.09582959 podStartE2EDuration="2m16.28354805s" podCreationTimestamp="2024-08-31 22:48:07 +0000 UTC" firstStartedPulling="2024-08-31 22:48:07.438767499 +0000 UTC m=+884.515567975" lastFinishedPulling="2024-08-31 22:48:08.626485959 +0000 UTC m=+885.703286435" observedRunningTime="2024-08-31 22:48:09.372108688 +0000 UTC m=+886.448909172" watchObservedRunningTime="2024-08-31 22:50:23.28354805 +0000 UTC m=+1020.360348534"
	Aug 31 22:50:23 addons-926553 kubelet[1497]: E0831 22:50:23.375871    1497 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144623374382670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582441,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:50:23 addons-926553 kubelet[1497]: E0831 22:50:23.375914    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725144623374382670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582441,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 22:50:24 addons-926553 kubelet[1497]: I0831 22:50:24.554723    1497 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8f41beaa-bd4a-4b1b-957e-d5fcd9f7aba9-tmp-dir\") pod \"8f41beaa-bd4a-4b1b-957e-d5fcd9f7aba9\" (UID: \"8f41beaa-bd4a-4b1b-957e-d5fcd9f7aba9\") "
	Aug 31 22:50:24 addons-926553 kubelet[1497]: I0831 22:50:24.554783    1497 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8wc7\" (UniqueName: \"kubernetes.io/projected/8f41beaa-bd4a-4b1b-957e-d5fcd9f7aba9-kube-api-access-x8wc7\") pod \"8f41beaa-bd4a-4b1b-957e-d5fcd9f7aba9\" (UID: \"8f41beaa-bd4a-4b1b-957e-d5fcd9f7aba9\") "
	Aug 31 22:50:24 addons-926553 kubelet[1497]: I0831 22:50:24.555360    1497 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f41beaa-bd4a-4b1b-957e-d5fcd9f7aba9-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "8f41beaa-bd4a-4b1b-957e-d5fcd9f7aba9" (UID: "8f41beaa-bd4a-4b1b-957e-d5fcd9f7aba9"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 31 22:50:24 addons-926553 kubelet[1497]: I0831 22:50:24.560360    1497 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f41beaa-bd4a-4b1b-957e-d5fcd9f7aba9-kube-api-access-x8wc7" (OuterVolumeSpecName: "kube-api-access-x8wc7") pod "8f41beaa-bd4a-4b1b-957e-d5fcd9f7aba9" (UID: "8f41beaa-bd4a-4b1b-957e-d5fcd9f7aba9"). InnerVolumeSpecName "kube-api-access-x8wc7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 31 22:50:24 addons-926553 kubelet[1497]: I0831 22:50:24.619039    1497 scope.go:117] "RemoveContainer" containerID="1512f4dc6befdd0bb0fbbf7a635956f65430c56cf871e38d6c0da5a0f3c101a5"
	Aug 31 22:50:24 addons-926553 kubelet[1497]: I0831 22:50:24.657213    1497 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8f41beaa-bd4a-4b1b-957e-d5fcd9f7aba9-tmp-dir\") on node \"addons-926553\" DevicePath \"\""
	Aug 31 22:50:24 addons-926553 kubelet[1497]: I0831 22:50:24.657249    1497 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-x8wc7\" (UniqueName: \"kubernetes.io/projected/8f41beaa-bd4a-4b1b-957e-d5fcd9f7aba9-kube-api-access-x8wc7\") on node \"addons-926553\" DevicePath \"\""
	Aug 31 22:50:24 addons-926553 kubelet[1497]: I0831 22:50:24.670874    1497 scope.go:117] "RemoveContainer" containerID="1512f4dc6befdd0bb0fbbf7a635956f65430c56cf871e38d6c0da5a0f3c101a5"
	Aug 31 22:50:24 addons-926553 kubelet[1497]: E0831 22:50:24.671292    1497 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1512f4dc6befdd0bb0fbbf7a635956f65430c56cf871e38d6c0da5a0f3c101a5\": container with ID starting with 1512f4dc6befdd0bb0fbbf7a635956f65430c56cf871e38d6c0da5a0f3c101a5 not found: ID does not exist" containerID="1512f4dc6befdd0bb0fbbf7a635956f65430c56cf871e38d6c0da5a0f3c101a5"
	Aug 31 22:50:24 addons-926553 kubelet[1497]: I0831 22:50:24.671329    1497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1512f4dc6befdd0bb0fbbf7a635956f65430c56cf871e38d6c0da5a0f3c101a5"} err="failed to get container status \"1512f4dc6befdd0bb0fbbf7a635956f65430c56cf871e38d6c0da5a0f3c101a5\": rpc error: code = NotFound desc = could not find container \"1512f4dc6befdd0bb0fbbf7a635956f65430c56cf871e38d6c0da5a0f3c101a5\": container with ID starting with 1512f4dc6befdd0bb0fbbf7a635956f65430c56cf871e38d6c0da5a0f3c101a5 not found: ID does not exist"
	Aug 31 22:50:25 addons-926553 kubelet[1497]: I0831 22:50:25.052157    1497 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f41beaa-bd4a-4b1b-957e-d5fcd9f7aba9" path="/var/lib/kubelet/pods/8f41beaa-bd4a-4b1b-957e-d5fcd9f7aba9/volumes"
	
	
	==> storage-provisioner [d4a4a18a5a7f6d6b98241bc922d29ac28c4b9779e5a615453b66ea70509523e8] <==
	I0831 22:34:15.733314       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0831 22:34:15.907321       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0831 22:34:15.907562       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0831 22:34:16.042095       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0831 22:34:16.048020       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-926553_0312c23a-1da9-4f4f-853d-3c7ebaecd6a0!
	I0831 22:34:16.060065       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d1090045-d7c1-4b36-83f3-943893f1aa8d", APIVersion:"v1", ResourceVersion:"934", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-926553_0312c23a-1da9-4f4f-853d-3c7ebaecd6a0 became leader
	I0831 22:34:16.149026       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-926553_0312c23a-1da9-4f4f-853d-3c7ebaecd6a0!
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-926553 -n addons-926553
helpers_test.go:262: (dbg) Run:  kubectl --context addons-926553 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:273: non-running pods: busybox
helpers_test.go:275: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:278: (dbg) Run:  kubectl --context addons-926553 describe pod busybox
helpers_test.go:283: (dbg) kubectl --context addons-926553 describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-926553/192.168.49.2
	Start Time:       Sat, 31 Aug 2024 22:36:01 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-npklh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-npklh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  14m                   default-scheduler  Successfully assigned default/busybox to addons-926553
	  Normal   Pulling    12m (x4 over 14m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     12m (x4 over 14m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     12m (x4 over 14m)     kubelet            Error: ErrImagePull
	  Warning  Failed     12m (x6 over 14m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m14s (x41 over 14m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:286: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:287: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (306.80s)
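Note: the non-running busybox pod described above is stuck in ImagePullBackOff because every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc fails with "unable to retrieve auth token: invalid username/password", so the pod never left Pending on this runner. A hypothetical way to reproduce that same pull from inside the node, outside the kubelet (not part of the recorded test run; assumes crictl is available in the node image, which it is for the cri-o runtime), would be:

	out/minikube-linux-arm64 -p addons-926553 ssh -- sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc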

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (16.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-330867 node delete m03 -v=7 --alsologtostderr: (11.398763258s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:516: expected 3 nodes to be Ready, got 
-- stdout --
	NAME            STATUS     ROLES           AGE     VERSION
	ha-330867       NotReady   control-plane   8m1s    v1.31.0
	ha-330867-m02   Ready      control-plane   7m33s   v1.31.0
	ha-330867-m04   Ready      <none>          5m5s    v1.31.0

                                                
                                                
-- /stdout --
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:524: expected 3 nodes Ready status to be True, got 
-- stdout --
	' Unknown
	 True
	 True
	'

                                                
                                                
-- /stdout --
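The single Unknown entry above corresponds to the NotReady control-plane node ha-330867 shown in the earlier node table. An equivalent per-node view of the Ready condition, pairing each name with its status (a hypothetical alternative to the go-template the test uses, not part of the recorded run), would be:

	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'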
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect ha-330867
helpers_test.go:236: (dbg) docker inspect ha-330867:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "db44dca62049d0d9134d666ca8fbf76da21abb37d9c21a41bddc9bfe1aa4f192",
	        "Created": "2024-08-31T22:54:59.324706066Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 333423,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-31T23:00:20.900221727Z",
	            "FinishedAt": "2024-08-31T23:00:20.052303294Z"
	        },
	        "Image": "sha256:eb620c1d7126103417d4dc31eb6aaaf95b0878713d0303a36cb77002c31b0deb",
	        "ResolvConfPath": "/var/lib/docker/containers/db44dca62049d0d9134d666ca8fbf76da21abb37d9c21a41bddc9bfe1aa4f192/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/db44dca62049d0d9134d666ca8fbf76da21abb37d9c21a41bddc9bfe1aa4f192/hostname",
	        "HostsPath": "/var/lib/docker/containers/db44dca62049d0d9134d666ca8fbf76da21abb37d9c21a41bddc9bfe1aa4f192/hosts",
	        "LogPath": "/var/lib/docker/containers/db44dca62049d0d9134d666ca8fbf76da21abb37d9c21a41bddc9bfe1aa4f192/db44dca62049d0d9134d666ca8fbf76da21abb37d9c21a41bddc9bfe1aa4f192-json.log",
	        "Name": "/ha-330867",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-330867:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-330867",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/50dd99986f3e611e1df49debfb3d9f49455382bd3e8a28c4563876fdc050928b-init/diff:/var/lib/docker/overlay2/b65bd3df822a42b081e949f262147909a06a528615f1ebee5ca341285d3e7159/diff",
	                "MergedDir": "/var/lib/docker/overlay2/50dd99986f3e611e1df49debfb3d9f49455382bd3e8a28c4563876fdc050928b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/50dd99986f3e611e1df49debfb3d9f49455382bd3e8a28c4563876fdc050928b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/50dd99986f3e611e1df49debfb3d9f49455382bd3e8a28c4563876fdc050928b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-330867",
	                "Source": "/var/lib/docker/volumes/ha-330867/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-330867",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-330867",
	                "name.minikube.sigs.k8s.io": "ha-330867",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9c0e70d8692c1e59b3b7515efe194b2ec5721f98790370de5f35e30ee9b201b1",
	            "SandboxKey": "/var/run/docker/netns/9c0e70d8692c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33173"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33174"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33177"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33175"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33176"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-330867": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "84f98643c6f36600558d56174c7006f409ffd5e61fb741f838ba34e8937fb59a",
	                    "EndpointID": "6957c058c84930833b3e1d9fd616439efc8ed842ef0be54296cf4cb37f871f0a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-330867",
	                        "db44dca62049"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
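The full docker inspect dump above is verbose; when only a couple of fields matter, they can be extracted directly with --format, using the same Go-template idiom minikube itself applies later in these logs (a hypothetical example for this profile, not part of the recorded run):

	docker inspect -f '{{.State.Status}} {{with (index .NetworkSettings.Networks "ha-330867")}}{{.IPAddress}}{{end}}' ha-330867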
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-330867 -n ha-330867
helpers_test.go:245: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p ha-330867 logs -n 25: (2.315364801s)
helpers_test.go:253: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-330867 ssh -n                                                                 | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | ha-330867-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-330867 ssh -n ha-330867-m02 sudo cat                                          | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | /home/docker/cp-test_ha-330867-m03_ha-330867-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-330867 cp ha-330867-m03:/home/docker/cp-test.txt                              | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | ha-330867-m04:/home/docker/cp-test_ha-330867-m03_ha-330867-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-330867 ssh -n                                                                 | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | ha-330867-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-330867 ssh -n ha-330867-m04 sudo cat                                          | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | /home/docker/cp-test_ha-330867-m03_ha-330867-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-330867 cp testdata/cp-test.txt                                                | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | ha-330867-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-330867 ssh -n                                                                 | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | ha-330867-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-330867 cp ha-330867-m04:/home/docker/cp-test.txt                              | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4235394847/001/cp-test_ha-330867-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-330867 ssh -n                                                                 | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | ha-330867-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-330867 cp ha-330867-m04:/home/docker/cp-test.txt                              | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | ha-330867:/home/docker/cp-test_ha-330867-m04_ha-330867.txt                       |           |         |         |                     |                     |
	| ssh     | ha-330867 ssh -n                                                                 | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | ha-330867-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-330867 ssh -n ha-330867 sudo cat                                              | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | /home/docker/cp-test_ha-330867-m04_ha-330867.txt                                 |           |         |         |                     |                     |
	| cp      | ha-330867 cp ha-330867-m04:/home/docker/cp-test.txt                              | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | ha-330867-m02:/home/docker/cp-test_ha-330867-m04_ha-330867-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-330867 ssh -n                                                                 | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | ha-330867-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-330867 ssh -n ha-330867-m02 sudo cat                                          | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | /home/docker/cp-test_ha-330867-m04_ha-330867-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-330867 cp ha-330867-m04:/home/docker/cp-test.txt                              | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | ha-330867-m03:/home/docker/cp-test_ha-330867-m04_ha-330867-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-330867 ssh -n                                                                 | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | ha-330867-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-330867 ssh -n ha-330867-m03 sudo cat                                          | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | /home/docker/cp-test_ha-330867-m04_ha-330867-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-330867 node stop m02 -v=7                                                     | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:59 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-330867 node start m02 -v=7                                                    | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:59 UTC | 31 Aug 24 22:59 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-330867 -v=7                                                           | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-330867 -v=7                                                                | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:59 UTC | 31 Aug 24 23:00 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-330867 --wait=true -v=7                                                    | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 23:00 UTC | 31 Aug 24 23:03 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-330867                                                                | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC |                     |
	| node    | ha-330867 node delete m03 -v=7                                                   | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 23:00:20
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 23:00:20.420347  333222 out.go:345] Setting OutFile to fd 1 ...
	I0831 23:00:20.420553  333222 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:00:20.420566  333222 out.go:358] Setting ErrFile to fd 2...
	I0831 23:00:20.420571  333222 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:00:20.420843  333222 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-277799/.minikube/bin
	I0831 23:00:20.421254  333222 out.go:352] Setting JSON to false
	I0831 23:00:20.422180  333222 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9769,"bootTime":1725135452,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0831 23:00:20.422252  333222 start.go:139] virtualization:  
	I0831 23:00:20.426400  333222 out.go:177] * [ha-330867] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0831 23:00:20.430111  333222 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 23:00:20.430284  333222 notify.go:220] Checking for updates...
	I0831 23:00:20.435607  333222 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 23:00:20.438173  333222 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-277799/kubeconfig
	I0831 23:00:20.440787  333222 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-277799/.minikube
	I0831 23:00:20.443301  333222 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0831 23:00:20.445840  333222 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 23:00:20.448866  333222 config.go:182] Loaded profile config "ha-330867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:00:20.449009  333222 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 23:00:20.481338  333222 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0831 23:00:20.481493  333222 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 23:00:20.536630  333222 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:41 SystemTime:2024-08-31 23:00:20.526894452 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 23:00:20.536738  333222 docker.go:307] overlay module found
	I0831 23:00:20.541718  333222 out.go:177] * Using the docker driver based on existing profile
	I0831 23:00:20.544085  333222 start.go:297] selected driver: docker
	I0831 23:00:20.544106  333222 start.go:901] validating driver "docker" against &{Name:ha-330867 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-330867 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingre
ss:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 23:00:20.544264  333222 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 23:00:20.544387  333222 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 23:00:20.599783  333222 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:41 SystemTime:2024-08-31 23:00:20.590082241 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 23:00:20.600230  333222 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 23:00:20.600256  333222 cni.go:84] Creating CNI manager for ""
	I0831 23:00:20.600266  333222 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0831 23:00:20.600319  333222 start.go:340] cluster config:
	{Name:ha-330867 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-330867 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisio
ner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 23:00:20.603420  333222 out.go:177] * Starting "ha-330867" primary control-plane node in "ha-330867" cluster
	I0831 23:00:20.606191  333222 cache.go:121] Beginning downloading kic base image for docker with crio
	I0831 23:00:20.608909  333222 out.go:177] * Pulling base image v0.0.44-1724862063-19530 ...
	I0831 23:00:20.611773  333222 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local docker daemon
	I0831 23:00:20.611767  333222 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 23:00:20.611873  333222 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18943-277799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	I0831 23:00:20.611883  333222 cache.go:56] Caching tarball of preloaded images
	I0831 23:00:20.611972  333222 preload.go:172] Found /home/jenkins/minikube-integration/18943-277799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0831 23:00:20.611980  333222 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0831 23:00:20.612145  333222 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/config.json ...
	W0831 23:00:20.631678  333222 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 is of wrong architecture
	I0831 23:00:20.631699  333222 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 to local cache
	I0831 23:00:20.631790  333222 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory
	I0831 23:00:20.631812  333222 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory, skipping pull
	I0831 23:00:20.631820  333222 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 exists in cache, skipping pull
	I0831 23:00:20.631829  333222 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 as a tarball
	I0831 23:00:20.631835  333222 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from local cache
	I0831 23:00:20.633273  333222 image.go:273] response: 
	I0831 23:00:20.756391  333222 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from cached tarball
	I0831 23:00:20.756451  333222 cache.go:194] Successfully downloaded all kic artifacts
	I0831 23:00:20.756501  333222 start.go:360] acquireMachinesLock for ha-330867: {Name:mk05480d63e8159586921c755402190e3148136c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 23:00:20.756577  333222 start.go:364] duration metric: took 48.221µs to acquireMachinesLock for "ha-330867"
	I0831 23:00:20.756605  333222 start.go:96] Skipping create...Using existing machine configuration
	I0831 23:00:20.756614  333222 fix.go:54] fixHost starting: 
	I0831 23:00:20.756896  333222 cli_runner.go:164] Run: docker container inspect ha-330867 --format={{.State.Status}}
	I0831 23:00:20.774165  333222 fix.go:112] recreateIfNeeded on ha-330867: state=Stopped err=<nil>
	W0831 23:00:20.774197  333222 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 23:00:20.777181  333222 out.go:177] * Restarting existing docker container for "ha-330867" ...
	I0831 23:00:20.779934  333222 cli_runner.go:164] Run: docker start ha-330867
	I0831 23:00:21.058701  333222 cli_runner.go:164] Run: docker container inspect ha-330867 --format={{.State.Status}}
	I0831 23:00:21.080703  333222 kic.go:435] container "ha-330867" state is running.
	I0831 23:00:21.081155  333222 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-330867")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-330867
	I0831 23:00:21.108184  333222 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/config.json ...
	I0831 23:00:21.108722  333222 machine.go:93] provisionDockerMachine start ...
	I0831 23:00:21.108814  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867
	I0831 23:00:21.129003  333222 main.go:141] libmachine: Using SSH client type: native
	I0831 23:00:21.129512  333222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I0831 23:00:21.129534  333222 main.go:141] libmachine: About to run SSH command:
	hostname
	I0831 23:00:21.130193  333222 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0831 23:00:24.264108  333222 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-330867
	
	I0831 23:00:24.264151  333222 ubuntu.go:169] provisioning hostname "ha-330867"
	I0831 23:00:24.264216  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867
	I0831 23:00:24.281497  333222 main.go:141] libmachine: Using SSH client type: native
	I0831 23:00:24.281763  333222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I0831 23:00:24.281781  333222 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-330867 && echo "ha-330867" | sudo tee /etc/hostname
	I0831 23:00:24.428633  333222 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-330867
	
	I0831 23:00:24.428753  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867
	I0831 23:00:24.446799  333222 main.go:141] libmachine: Using SSH client type: native
	I0831 23:00:24.447064  333222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I0831 23:00:24.447088  333222 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-330867' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-330867/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-330867' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 23:00:24.580394  333222 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 23:00:24.580434  333222 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-277799/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-277799/.minikube}
	I0831 23:00:24.580465  333222 ubuntu.go:177] setting up certificates
	I0831 23:00:24.580475  333222 provision.go:84] configureAuth start
	I0831 23:00:24.580537  333222 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-330867")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-330867
	I0831 23:00:24.596778  333222 provision.go:143] copyHostCerts
	I0831 23:00:24.596826  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-277799/.minikube/ca.pem
	I0831 23:00:24.596863  333222 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-277799/.minikube/ca.pem, removing ...
	I0831 23:00:24.596873  333222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.pem
	I0831 23:00:24.596949  333222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-277799/.minikube/ca.pem (1082 bytes)
	I0831 23:00:24.597042  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-277799/.minikube/cert.pem
	I0831 23:00:24.597065  333222 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-277799/.minikube/cert.pem, removing ...
	I0831 23:00:24.597074  333222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-277799/.minikube/cert.pem
	I0831 23:00:24.597101  333222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-277799/.minikube/cert.pem (1123 bytes)
	I0831 23:00:24.597153  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-277799/.minikube/key.pem
	I0831 23:00:24.597177  333222 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-277799/.minikube/key.pem, removing ...
	I0831 23:00:24.597187  333222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-277799/.minikube/key.pem
	I0831 23:00:24.597212  333222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-277799/.minikube/key.pem (1675 bytes)
	I0831 23:00:24.597314  333222 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca-key.pem org=jenkins.ha-330867 san=[127.0.0.1 192.168.49.2 ha-330867 localhost minikube]
	I0831 23:00:24.940742  333222 provision.go:177] copyRemoteCerts
	I0831 23:00:24.940819  333222 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 23:00:24.940863  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867
	I0831 23:00:24.957682  333222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867/id_rsa Username:docker}
	I0831 23:00:25.055735  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0831 23:00:25.055806  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0831 23:00:25.087475  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0831 23:00:25.087561  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0831 23:00:25.116844  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0831 23:00:25.116970  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0831 23:00:25.151006  333222 provision.go:87] duration metric: took 570.515135ms to configureAuth
	I0831 23:00:25.151075  333222 ubuntu.go:193] setting minikube options for container-runtime
	I0831 23:00:25.151339  333222 config.go:182] Loaded profile config "ha-330867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:00:25.151457  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867
	I0831 23:00:25.169333  333222 main.go:141] libmachine: Using SSH client type: native
	I0831 23:00:25.169594  333222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33173 <nil> <nil>}
	I0831 23:00:25.169616  333222 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0831 23:00:25.619162  333222 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0831 23:00:25.619289  333222 machine.go:96] duration metric: took 4.510549887s to provisionDockerMachine
	I0831 23:00:25.619332  333222 start.go:293] postStartSetup for "ha-330867" (driver="docker")
	I0831 23:00:25.619388  333222 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 23:00:25.619514  333222 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 23:00:25.619614  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867
	I0831 23:00:25.638456  333222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867/id_rsa Username:docker}
	I0831 23:00:25.741887  333222 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 23:00:25.745509  333222 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0831 23:00:25.745580  333222 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0831 23:00:25.745601  333222 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0831 23:00:25.745608  333222 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0831 23:00:25.745647  333222 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-277799/.minikube/addons for local assets ...
	I0831 23:00:25.745721  333222 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-277799/.minikube/files for local assets ...
	I0831 23:00:25.745810  333222 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem -> 2831972.pem in /etc/ssl/certs
	I0831 23:00:25.745823  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem -> /etc/ssl/certs/2831972.pem
	I0831 23:00:25.745925  333222 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0831 23:00:25.755362  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem --> /etc/ssl/certs/2831972.pem (1708 bytes)
	I0831 23:00:25.782541  333222 start.go:296] duration metric: took 163.154014ms for postStartSetup
	I0831 23:00:25.782658  333222 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 23:00:25.782746  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867
	I0831 23:00:25.803612  333222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867/id_rsa Username:docker}
	I0831 23:00:25.897544  333222 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0831 23:00:25.902124  333222 fix.go:56] duration metric: took 5.145503634s for fixHost
	I0831 23:00:25.902151  333222 start.go:83] releasing machines lock for "ha-330867", held for 5.145558961s
	I0831 23:00:25.902221  333222 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-330867")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-330867
	I0831 23:00:25.918353  333222 ssh_runner.go:195] Run: cat /version.json
	I0831 23:00:25.918417  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867
	I0831 23:00:25.918651  333222 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 23:00:25.918707  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867
	I0831 23:00:25.934968  333222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867/id_rsa Username:docker}
	I0831 23:00:25.942006  333222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867/id_rsa Username:docker}
	I0831 23:00:26.029848  333222 ssh_runner.go:195] Run: systemctl --version
	I0831 23:00:26.171523  333222 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0831 23:00:26.312010  333222 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0831 23:00:26.316467  333222 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 23:00:26.325188  333222 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0831 23:00:26.325276  333222 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 23:00:26.334364  333222 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0831 23:00:26.334434  333222 start.go:495] detecting cgroup driver to use...
	I0831 23:00:26.334485  333222 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0831 23:00:26.334556  333222 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0831 23:00:26.347095  333222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 23:00:26.358883  333222 docker.go:217] disabling cri-docker service (if available) ...
	I0831 23:00:26.358950  333222 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0831 23:00:26.371918  333222 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0831 23:00:26.383538  333222 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0831 23:00:26.471238  333222 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0831 23:00:26.563262  333222 docker.go:233] disabling docker service ...
	I0831 23:00:26.563384  333222 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0831 23:00:26.576180  333222 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0831 23:00:26.588282  333222 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0831 23:00:26.670799  333222 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0831 23:00:26.758824  333222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0831 23:00:26.771513  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 23:00:26.788508  333222 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0831 23:00:26.788622  333222 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:00:26.799031  333222 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0831 23:00:26.799164  333222 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:00:26.809542  333222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:00:26.819635  333222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:00:26.829697  333222 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 23:00:26.839032  333222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:00:26.849250  333222 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:00:26.859226  333222 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:00:26.868828  333222 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 23:00:26.877335  333222 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 23:00:26.885926  333222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 23:00:26.974860  333222 ssh_runner.go:195] Run: sudo systemctl restart crio
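The sed sequence above amounts to three CRI-O changes before the restart: pin the pause image to registry.k8s.io/pause:3.10, force cgroup_manager = "cgroupfs" with conmon_cgroup = "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A quick way to spot-check the result on the node, as a sketch (the grep pattern is illustrative; the path is the one edited above):

	sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf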
	I0831 23:00:27.105571  333222 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0831 23:00:27.105652  333222 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0831 23:00:27.109739  333222 start.go:563] Will wait 60s for crictl version
	I0831 23:00:27.109806  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:00:27.113439  333222 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 23:00:27.160430  333222 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0831 23:00:27.160587  333222 ssh_runner.go:195] Run: crio --version
	I0831 23:00:27.203790  333222 ssh_runner.go:195] Run: crio --version
	I0831 23:00:27.244633  333222 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0831 23:00:27.247137  333222 cli_runner.go:164] Run: docker network inspect ha-330867 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0831 23:00:27.263926  333222 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0831 23:00:27.267670  333222 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 23:00:27.279030  333222 kubeadm.go:883] updating cluster {Name:ha-330867 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-330867 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0831 23:00:27.279185  333222 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 23:00:27.279246  333222 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 23:00:27.325464  333222 crio.go:514] all images are preloaded for cri-o runtime.
	I0831 23:00:27.325490  333222 crio.go:433] Images already preloaded, skipping extraction
	I0831 23:00:27.325554  333222 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 23:00:27.364258  333222 crio.go:514] all images are preloaded for cri-o runtime.
	I0831 23:00:27.364281  333222 cache_images.go:84] Images are preloaded, skipping loading
	I0831 23:00:27.364294  333222 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0831 23:00:27.364468  333222 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-330867 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-330867 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0831 23:00:27.364571  333222 ssh_runner.go:195] Run: crio config
	I0831 23:00:27.422677  333222 cni.go:84] Creating CNI manager for ""
	I0831 23:00:27.422700  333222 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0831 23:00:27.422710  333222 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0831 23:00:27.422733  333222 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-330867 NodeName:ha-330867 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0831 23:00:27.422885  333222 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-330867"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
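The three documents above (InitConfiguration, ClusterConfiguration, and the KubeletConfiguration/KubeProxyConfiguration pair) are what is later written to the node as /var/tmp/minikube/kubeadm.yaml.new. A hedged way to validate such a file against its schemas, assuming the bundled kubeadm binary referenced later in this log:

	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new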
	I0831 23:00:27.422905  333222 kube-vip.go:115] generating kube-vip config ...
	I0831 23:00:27.422958  333222 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0831 23:00:27.435819  333222 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0831 23:00:27.435934  333222 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
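This static pod manifest is what gets copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below; once kubelet starts it, kube-vip holds the 192.168.49.254 VIP and load-balances port 8443 across the control-plane nodes. A rough liveness probe from any node, as a sketch (the -k flag is needed because the VIP presents the cluster's own CA):

	curl -k --max-time 2 https://192.168.49.254:8443/version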
	I0831 23:00:27.435994  333222 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0831 23:00:27.444589  333222 binaries.go:44] Found k8s binaries, skipping transfer
	I0831 23:00:27.444662  333222 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0831 23:00:27.453833  333222 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0831 23:00:27.472334  333222 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 23:00:27.491272  333222 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0831 23:00:27.510338  333222 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0831 23:00:27.529762  333222 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0831 23:00:27.533583  333222 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 23:00:27.544634  333222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 23:00:27.628887  333222 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 23:00:27.642713  333222 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867 for IP: 192.168.49.2
	I0831 23:00:27.642736  333222 certs.go:194] generating shared ca certs ...
	I0831 23:00:27.642753  333222 certs.go:226] acquiring lock for ca certs: {Name:mk25c48345241d49df22687ae20353d5a7b46e8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:00:27.642890  333222 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key
	I0831 23:00:27.642939  333222 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key
	I0831 23:00:27.642950  333222 certs.go:256] generating profile certs ...
	I0831 23:00:27.643029  333222 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/client.key
	I0831 23:00:27.643062  333222 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.key.e634490b
	I0831 23:00:27.643081  333222 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.crt.e634490b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0831 23:00:27.944844  333222 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.crt.e634490b ...
	I0831 23:00:27.944880  333222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.crt.e634490b: {Name:mk86a457dffdfc23a518445b685072e68b6583fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:00:27.945087  333222 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.key.e634490b ...
	I0831 23:00:27.945101  333222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.key.e634490b: {Name:mk8911c06dc6d49b31fd37d745629fa3e5aefd8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:00:27.945194  333222 certs.go:381] copying /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.crt.e634490b -> /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.crt
	I0831 23:00:27.945349  333222 certs.go:385] copying /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.key.e634490b -> /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.key
	I0831 23:00:27.945494  333222 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/proxy-client.key
	I0831 23:00:27.945514  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0831 23:00:27.945530  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0831 23:00:27.945548  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0831 23:00:27.945566  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0831 23:00:27.945581  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0831 23:00:27.945598  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0831 23:00:27.945613  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0831 23:00:27.945629  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0831 23:00:27.945685  333222 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/283197.pem (1338 bytes)
	W0831 23:00:27.945717  333222 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-277799/.minikube/certs/283197_empty.pem, impossibly tiny 0 bytes
	I0831 23:00:27.945730  333222 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca-key.pem (1679 bytes)
	I0831 23:00:27.945757  333222 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem (1082 bytes)
	I0831 23:00:27.945786  333222 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem (1123 bytes)
	I0831 23:00:27.945813  333222 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem (1675 bytes)
	I0831 23:00:27.945876  333222 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem (1708 bytes)
	I0831 23:00:27.945916  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/283197.pem -> /usr/share/ca-certificates/283197.pem
	I0831 23:00:27.945931  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem -> /usr/share/ca-certificates/2831972.pem
	I0831 23:00:27.945953  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:00:27.946624  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 23:00:27.974038  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 23:00:28.000879  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 23:00:28.031197  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0831 23:00:28.061229  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0831 23:00:28.087442  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0831 23:00:28.112967  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 23:00:28.137878  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0831 23:00:28.164599  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/certs/283197.pem --> /usr/share/ca-certificates/283197.pem (1338 bytes)
	I0831 23:00:28.191642  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem --> /usr/share/ca-certificates/2831972.pem (1708 bytes)
	I0831 23:00:28.216516  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 23:00:28.240705  333222 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0831 23:00:28.259787  333222 ssh_runner.go:195] Run: openssl version
	I0831 23:00:28.265662  333222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/283197.pem && ln -fs /usr/share/ca-certificates/283197.pem /etc/ssl/certs/283197.pem"
	I0831 23:00:28.275116  333222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/283197.pem
	I0831 23:00:28.278686  333222 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 31 22:51 /usr/share/ca-certificates/283197.pem
	I0831 23:00:28.278757  333222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/283197.pem
	I0831 23:00:28.285955  333222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/283197.pem /etc/ssl/certs/51391683.0"
	I0831 23:00:28.294965  333222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2831972.pem && ln -fs /usr/share/ca-certificates/2831972.pem /etc/ssl/certs/2831972.pem"
	I0831 23:00:28.304282  333222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2831972.pem
	I0831 23:00:28.307650  333222 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 31 22:51 /usr/share/ca-certificates/2831972.pem
	I0831 23:00:28.307716  333222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2831972.pem
	I0831 23:00:28.314762  333222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2831972.pem /etc/ssl/certs/3ec20f2e.0"
	I0831 23:00:28.323746  333222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 23:00:28.333451  333222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:00:28.337190  333222 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:00:28.337258  333222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:00:28.344258  333222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
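The <hash>.0 link names created above follow OpenSSL's subject-hash convention: the value printed by openssl x509 -hash -noout becomes the file name under /etc/ssl/certs. Re-deriving the minikube CA link name, as a sketch using the same command the log runs:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941, matching /etc/ssl/certs/b5213941.0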
	I0831 23:00:28.353724  333222 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 23:00:28.357446  333222 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0831 23:00:28.364590  333222 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0831 23:00:28.371631  333222 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0831 23:00:28.378725  333222 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0831 23:00:28.386402  333222 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0831 23:00:28.393485  333222 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0831 23:00:28.400515  333222 kubeadm.go:392] StartCluster: {Name:ha-330867 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-330867 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 23:00:28.400647  333222 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0831 23:00:28.400729  333222 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0831 23:00:28.443723  333222 cri.go:89] found id: ""
	I0831 23:00:28.443792  333222 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0831 23:00:28.452755  333222 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0831 23:00:28.452829  333222 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0831 23:00:28.452913  333222 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0831 23:00:28.462105  333222 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0831 23:00:28.462576  333222 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-330867" does not appear in /home/jenkins/minikube-integration/18943-277799/kubeconfig
	I0831 23:00:28.462688  333222 kubeconfig.go:62] /home/jenkins/minikube-integration/18943-277799/kubeconfig needs updating (will repair): [kubeconfig missing "ha-330867" cluster setting kubeconfig missing "ha-330867" context setting]
	I0831 23:00:28.463003  333222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/kubeconfig: {Name:mk030275545fba839e6cc35acffc3f7a124ed10a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:00:28.463421  333222 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-277799/kubeconfig
	I0831 23:00:28.463676  333222 kapi.go:59] client config for ha-330867: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/client.key", CAFile:"/home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19cbad0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0831 23:00:28.464325  333222 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0831 23:00:28.464542  333222 cert_rotation.go:140] Starting client certificate rotation controller
	I0831 23:00:28.474512  333222 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.49.2
	I0831 23:00:28.474540  333222 kubeadm.go:597] duration metric: took 21.696723ms to restartPrimaryControlPlane
	I0831 23:00:28.474550  333222 kubeadm.go:394] duration metric: took 74.045804ms to StartCluster
	I0831 23:00:28.474567  333222 settings.go:142] acquiring lock: {Name:mkadbc7d53c5858a38d57ec152e52037ebee242b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:00:28.474630  333222 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-277799/kubeconfig
	I0831 23:00:28.475210  333222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/kubeconfig: {Name:mk030275545fba839e6cc35acffc3f7a124ed10a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:00:28.475398  333222 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 23:00:28.475423  333222 start.go:241] waiting for startup goroutines ...
	I0831 23:00:28.475436  333222 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0831 23:00:28.475920  333222 config.go:182] Loaded profile config "ha-330867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:00:28.480916  333222 out.go:177] * Enabled addons: 
	I0831 23:00:28.483282  333222 addons.go:510] duration metric: took 7.844994ms for enable addons: enabled=[]
	I0831 23:00:28.483326  333222 start.go:246] waiting for cluster config update ...
	I0831 23:00:28.483335  333222 start.go:255] writing updated cluster config ...
	I0831 23:00:28.485948  333222 out.go:201] 
	I0831 23:00:28.488625  333222 config.go:182] Loaded profile config "ha-330867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:00:28.488771  333222 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/config.json ...
	I0831 23:00:28.491545  333222 out.go:177] * Starting "ha-330867-m02" control-plane node in "ha-330867" cluster
	I0831 23:00:28.493954  333222 cache.go:121] Beginning downloading kic base image for docker with crio
	I0831 23:00:28.496481  333222 out.go:177] * Pulling base image v0.0.44-1724862063-19530 ...
	I0831 23:00:28.498945  333222 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 23:00:28.498967  333222 cache.go:56] Caching tarball of preloaded images
	I0831 23:00:28.498998  333222 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local docker daemon
	I0831 23:00:28.499060  333222 preload.go:172] Found /home/jenkins/minikube-integration/18943-277799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0831 23:00:28.499075  333222 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0831 23:00:28.499198  333222 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/config.json ...
	W0831 23:00:28.517109  333222 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 is of wrong architecture
	I0831 23:00:28.517132  333222 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 to local cache
	I0831 23:00:28.517196  333222 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory
	I0831 23:00:28.517220  333222 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory, skipping pull
	I0831 23:00:28.517230  333222 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 exists in cache, skipping pull
	I0831 23:00:28.517239  333222 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 as a tarball
	I0831 23:00:28.517252  333222 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from local cache
	I0831 23:00:28.518388  333222 image.go:273] response: 
	I0831 23:00:28.642453  333222 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from cached tarball
	I0831 23:00:28.642493  333222 cache.go:194] Successfully downloaded all kic artifacts
	I0831 23:00:28.642530  333222 start.go:360] acquireMachinesLock for ha-330867-m02: {Name:mk1b868483094d3fb1d98465dcb37de63a18b6cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 23:00:28.642596  333222 start.go:364] duration metric: took 45.579µs to acquireMachinesLock for "ha-330867-m02"
	I0831 23:00:28.642624  333222 start.go:96] Skipping create...Using existing machine configuration
	I0831 23:00:28.642631  333222 fix.go:54] fixHost starting: m02
	I0831 23:00:28.642898  333222 cli_runner.go:164] Run: docker container inspect ha-330867-m02 --format={{.State.Status}}
	I0831 23:00:28.658792  333222 fix.go:112] recreateIfNeeded on ha-330867-m02: state=Stopped err=<nil>
	W0831 23:00:28.658823  333222 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 23:00:28.661885  333222 out.go:177] * Restarting existing docker container for "ha-330867-m02" ...
	I0831 23:00:28.664514  333222 cli_runner.go:164] Run: docker start ha-330867-m02
	I0831 23:00:28.952352  333222 cli_runner.go:164] Run: docker container inspect ha-330867-m02 --format={{.State.Status}}
	I0831 23:00:28.979045  333222 kic.go:435] container "ha-330867-m02" state is running.
	I0831 23:00:28.979619  333222 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-330867")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-330867-m02
	I0831 23:00:29.002692  333222 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/config.json ...
	I0831 23:00:29.002946  333222 machine.go:93] provisionDockerMachine start ...
	I0831 23:00:29.003052  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m02
	I0831 23:00:29.027199  333222 main.go:141] libmachine: Using SSH client type: native
	I0831 23:00:29.027651  333222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I0831 23:00:29.027669  333222 main.go:141] libmachine: About to run SSH command:
	hostname
	I0831 23:00:29.028984  333222 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43576->127.0.0.1:33178: read: connection reset by peer
	I0831 23:00:32.222605  333222 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-330867-m02
	
	I0831 23:00:32.222681  333222 ubuntu.go:169] provisioning hostname "ha-330867-m02"
	I0831 23:00:32.222763  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m02
	I0831 23:00:32.254095  333222 main.go:141] libmachine: Using SSH client type: native
	I0831 23:00:32.254334  333222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I0831 23:00:32.254345  333222 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-330867-m02 && echo "ha-330867-m02" | sudo tee /etc/hostname
	I0831 23:00:32.471498  333222 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-330867-m02
	
	I0831 23:00:32.471714  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m02
	I0831 23:00:32.516726  333222 main.go:141] libmachine: Using SSH client type: native
	I0831 23:00:32.517075  333222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I0831 23:00:32.517097  333222 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-330867-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-330867-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-330867-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 23:00:32.725854  333222 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 23:00:32.725891  333222 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-277799/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-277799/.minikube}
	I0831 23:00:32.725934  333222 ubuntu.go:177] setting up certificates
	I0831 23:00:32.725968  333222 provision.go:84] configureAuth start
	I0831 23:00:32.726087  333222 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-330867")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-330867-m02
	I0831 23:00:32.760611  333222 provision.go:143] copyHostCerts
	I0831 23:00:32.760661  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-277799/.minikube/cert.pem
	I0831 23:00:32.760698  333222 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-277799/.minikube/cert.pem, removing ...
	I0831 23:00:32.760711  333222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-277799/.minikube/cert.pem
	I0831 23:00:32.760792  333222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-277799/.minikube/cert.pem (1123 bytes)
	I0831 23:00:32.760878  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-277799/.minikube/key.pem
	I0831 23:00:32.760901  333222 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-277799/.minikube/key.pem, removing ...
	I0831 23:00:32.760909  333222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-277799/.minikube/key.pem
	I0831 23:00:32.760936  333222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-277799/.minikube/key.pem (1675 bytes)
	I0831 23:00:32.760987  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-277799/.minikube/ca.pem
	I0831 23:00:32.761009  333222 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-277799/.minikube/ca.pem, removing ...
	I0831 23:00:32.761017  333222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.pem
	I0831 23:00:32.761043  333222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-277799/.minikube/ca.pem (1082 bytes)
	I0831 23:00:32.761098  333222 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca-key.pem org=jenkins.ha-330867-m02 san=[127.0.0.1 192.168.49.3 ha-330867-m02 localhost minikube]
	I0831 23:00:33.280106  333222 provision.go:177] copyRemoteCerts
	I0831 23:00:33.280199  333222 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 23:00:33.280268  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m02
	I0831 23:00:33.298662  333222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867-m02/id_rsa Username:docker}
	I0831 23:00:33.415335  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0831 23:00:33.415399  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0831 23:00:33.494351  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0831 23:00:33.494417  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0831 23:00:33.556134  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0831 23:00:33.556210  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0831 23:00:33.622665  333222 provision.go:87] duration metric: took 896.667513ms to configureAuth
	I0831 23:00:33.622698  333222 ubuntu.go:193] setting minikube options for container-runtime
	I0831 23:00:33.622983  333222 config.go:182] Loaded profile config "ha-330867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:00:33.623113  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m02
	I0831 23:00:33.669455  333222 main.go:141] libmachine: Using SSH client type: native
	I0831 23:00:33.669702  333222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33178 <nil> <nil>}
	I0831 23:00:33.669717  333222 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0831 23:00:34.153048  333222 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0831 23:00:34.153076  333222 machine.go:96] duration metric: took 5.15011944s to provisionDockerMachine
	I0831 23:00:34.153087  333222 start.go:293] postStartSetup for "ha-330867-m02" (driver="docker")
	I0831 23:00:34.153099  333222 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 23:00:34.153162  333222 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 23:00:34.153213  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m02
	I0831 23:00:34.170448  333222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867-m02/id_rsa Username:docker}
	I0831 23:00:34.270474  333222 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 23:00:34.274322  333222 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0831 23:00:34.274362  333222 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0831 23:00:34.274374  333222 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0831 23:00:34.274381  333222 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0831 23:00:34.274392  333222 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-277799/.minikube/addons for local assets ...
	I0831 23:00:34.274454  333222 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-277799/.minikube/files for local assets ...
	I0831 23:00:34.274533  333222 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem -> 2831972.pem in /etc/ssl/certs
	I0831 23:00:34.274545  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem -> /etc/ssl/certs/2831972.pem
	I0831 23:00:34.274648  333222 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0831 23:00:34.287224  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem --> /etc/ssl/certs/2831972.pem (1708 bytes)
	I0831 23:00:34.321726  333222 start.go:296] duration metric: took 168.623942ms for postStartSetup
	I0831 23:00:34.321812  333222 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 23:00:34.321858  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m02
	I0831 23:00:34.345603  333222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867-m02/id_rsa Username:docker}
	I0831 23:00:34.437930  333222 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0831 23:00:34.445527  333222 fix.go:56] duration metric: took 5.802888141s for fixHost
	I0831 23:00:34.445554  333222 start.go:83] releasing machines lock for "ha-330867-m02", held for 5.802942073s
	I0831 23:00:34.445624  333222 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-330867")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-330867-m02
	I0831 23:00:34.475438  333222 out.go:177] * Found network options:
	I0831 23:00:34.478083  333222 out.go:177]   - NO_PROXY=192.168.49.2
	W0831 23:00:34.480742  333222 proxy.go:119] fail to check proxy env: Error ip not in block
	W0831 23:00:34.480794  333222 proxy.go:119] fail to check proxy env: Error ip not in block
	I0831 23:00:34.480861  333222 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0831 23:00:34.480912  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m02
	I0831 23:00:34.481177  333222 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 23:00:34.481238  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m02
	I0831 23:00:34.520556  333222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867-m02/id_rsa Username:docker}
	I0831 23:00:34.521215  333222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867-m02/id_rsa Username:docker}
	I0831 23:00:34.872334  333222 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0831 23:00:34.898381  333222 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 23:00:34.926141  333222 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0831 23:00:34.926269  333222 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 23:00:34.940706  333222 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0831 23:00:34.940785  333222 start.go:495] detecting cgroup driver to use...
	I0831 23:00:34.940834  333222 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0831 23:00:34.940912  333222 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0831 23:00:34.984807  333222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 23:00:35.005715  333222 docker.go:217] disabling cri-docker service (if available) ...
	I0831 23:00:35.005884  333222 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0831 23:00:35.027022  333222 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0831 23:00:35.055942  333222 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0831 23:00:35.610917  333222 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0831 23:00:35.915907  333222 docker.go:233] disabling docker service ...
	I0831 23:00:35.916029  333222 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0831 23:00:35.975575  333222 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0831 23:00:36.027745  333222 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0831 23:00:36.327376  333222 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0831 23:00:36.597790  333222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0831 23:00:36.658435  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 23:00:36.748388  333222 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0831 23:00:36.748516  333222 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:00:36.795995  333222 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0831 23:00:36.796112  333222 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:00:36.843496  333222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:00:36.896885  333222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:00:36.937925  333222 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 23:00:36.986902  333222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:00:37.060974  333222 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:00:37.097252  333222 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:00:37.137037  333222 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 23:00:37.176970  333222 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 23:00:37.210699  333222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 23:00:37.452714  333222 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0831 23:00:37.897762  333222 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0831 23:00:37.897889  333222 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0831 23:00:37.908924  333222 start.go:563] Will wait 60s for crictl version
	I0831 23:00:37.909039  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:00:37.912861  333222 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 23:00:38.007011  333222 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0831 23:00:38.007191  333222 ssh_runner.go:195] Run: crio --version
	I0831 23:00:38.139009  333222 ssh_runner.go:195] Run: crio --version
	I0831 23:00:38.278109  333222 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0831 23:00:38.280757  333222 out.go:177]   - env NO_PROXY=192.168.49.2
	I0831 23:00:38.283462  333222 cli_runner.go:164] Run: docker network inspect ha-330867 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0831 23:00:38.318851  333222 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0831 23:00:38.322773  333222 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 23:00:38.342221  333222 mustload.go:65] Loading cluster: ha-330867
	I0831 23:00:38.342476  333222 config.go:182] Loaded profile config "ha-330867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:00:38.342747  333222 cli_runner.go:164] Run: docker container inspect ha-330867 --format={{.State.Status}}
	I0831 23:00:38.372487  333222 host.go:66] Checking if "ha-330867" exists ...
	I0831 23:00:38.372772  333222 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867 for IP: 192.168.49.3
	I0831 23:00:38.372780  333222 certs.go:194] generating shared ca certs ...
	I0831 23:00:38.372794  333222 certs.go:226] acquiring lock for ca certs: {Name:mk25c48345241d49df22687ae20353d5a7b46e8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:00:38.372902  333222 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key
	I0831 23:00:38.372940  333222 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key
	I0831 23:00:38.372946  333222 certs.go:256] generating profile certs ...
	I0831 23:00:38.373020  333222 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/client.key
	I0831 23:00:38.373068  333222 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.key.eb02dd35
	I0831 23:00:38.373105  333222 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/proxy-client.key
	I0831 23:00:38.373114  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0831 23:00:38.373127  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0831 23:00:38.373138  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0831 23:00:38.373149  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0831 23:00:38.373159  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0831 23:00:38.373170  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0831 23:00:38.373182  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0831 23:00:38.373192  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0831 23:00:38.373243  333222 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/283197.pem (1338 bytes)
	W0831 23:00:38.373273  333222 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-277799/.minikube/certs/283197_empty.pem, impossibly tiny 0 bytes
	I0831 23:00:38.373281  333222 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca-key.pem (1679 bytes)
	I0831 23:00:38.373303  333222 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem (1082 bytes)
	I0831 23:00:38.373326  333222 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem (1123 bytes)
	I0831 23:00:38.373347  333222 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem (1675 bytes)
	I0831 23:00:38.373393  333222 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem (1708 bytes)
	I0831 23:00:38.373419  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/283197.pem -> /usr/share/ca-certificates/283197.pem
	I0831 23:00:38.373431  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem -> /usr/share/ca-certificates/2831972.pem
	I0831 23:00:38.373442  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:00:38.373496  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867
	I0831 23:00:38.400523  333222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867/id_rsa Username:docker}
	I0831 23:00:38.508688  333222 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0831 23:00:38.516598  333222 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0831 23:00:38.546099  333222 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0831 23:00:38.558701  333222 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0831 23:00:38.582966  333222 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0831 23:00:38.595196  333222 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0831 23:00:38.618006  333222 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0831 23:00:38.632861  333222 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0831 23:00:38.669006  333222 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0831 23:00:38.674641  333222 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0831 23:00:38.686612  333222 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0831 23:00:38.690099  333222 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0831 23:00:38.702186  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 23:00:38.727028  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 23:00:38.751665  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 23:00:38.793779  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0831 23:00:38.834515  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0831 23:00:38.877736  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0831 23:00:38.917680  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 23:00:38.952888  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0831 23:00:38.997889  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/certs/283197.pem --> /usr/share/ca-certificates/283197.pem (1338 bytes)
	I0831 23:00:39.066024  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem --> /usr/share/ca-certificates/2831972.pem (1708 bytes)
	I0831 23:00:39.108424  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 23:00:39.155266  333222 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0831 23:00:39.178778  333222 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0831 23:00:39.209411  333222 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0831 23:00:39.238212  333222 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0831 23:00:39.267000  333222 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0831 23:00:39.299377  333222 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0831 23:00:39.331360  333222 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0831 23:00:39.353201  333222 ssh_runner.go:195] Run: openssl version
	I0831 23:00:39.359977  333222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/283197.pem && ln -fs /usr/share/ca-certificates/283197.pem /etc/ssl/certs/283197.pem"
	I0831 23:00:39.370156  333222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/283197.pem
	I0831 23:00:39.374657  333222 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 31 22:51 /usr/share/ca-certificates/283197.pem
	I0831 23:00:39.374739  333222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/283197.pem
	I0831 23:00:39.383717  333222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/283197.pem /etc/ssl/certs/51391683.0"
	I0831 23:00:39.393266  333222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2831972.pem && ln -fs /usr/share/ca-certificates/2831972.pem /etc/ssl/certs/2831972.pem"
	I0831 23:00:39.403634  333222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2831972.pem
	I0831 23:00:39.408187  333222 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 31 22:51 /usr/share/ca-certificates/2831972.pem
	I0831 23:00:39.408276  333222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2831972.pem
	I0831 23:00:39.419670  333222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2831972.pem /etc/ssl/certs/3ec20f2e.0"
	I0831 23:00:39.429153  333222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 23:00:39.438971  333222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:00:39.443318  333222 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:00:39.443395  333222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:00:39.451186  333222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0831 23:00:39.460298  333222 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 23:00:39.464768  333222 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0831 23:00:39.472526  333222 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0831 23:00:39.480139  333222 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0831 23:00:39.487804  333222 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0831 23:00:39.495488  333222 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0831 23:00:39.503595  333222 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0831 23:00:39.511815  333222 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.31.0 crio true true} ...
	I0831 23:00:39.511938  333222 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-330867-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-330867 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0831 23:00:39.511975  333222 kube-vip.go:115] generating kube-vip config ...
	I0831 23:00:39.512036  333222 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0831 23:00:39.530212  333222 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0831 23:00:39.530284  333222 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0831 23:00:39.530363  333222 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0831 23:00:39.540878  333222 binaries.go:44] Found k8s binaries, skipping transfer
	I0831 23:00:39.540967  333222 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0831 23:00:39.550724  333222 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0831 23:00:39.571398  333222 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 23:00:39.592520  333222 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0831 23:00:39.613030  333222 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0831 23:00:39.617165  333222 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 23:00:39.631362  333222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 23:00:39.783606  333222 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 23:00:39.802721  333222 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 23:00:39.803071  333222 config.go:182] Loaded profile config "ha-330867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:00:39.806264  333222 out.go:177] * Verifying Kubernetes components...
	I0831 23:00:39.808739  333222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 23:00:39.978533  333222 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 23:00:39.998598  333222 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-277799/kubeconfig
	I0831 23:00:39.998897  333222 kapi.go:59] client config for ha-330867: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/client.key", CAFile:"/home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19cbad0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0831 23:00:39.998966  333222 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0831 23:00:39.999218  333222 node_ready.go:35] waiting up to 6m0s for node "ha-330867-m02" to be "Ready" ...
	I0831 23:00:39.999304  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:00:39.999324  333222 round_trippers.go:469] Request Headers:
	I0831 23:00:39.999339  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:00:39.999344  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:00:53.167165  333222 round_trippers.go:574] Response Status: 500 Internal Server Error in 13167 milliseconds
	I0831 23:00:53.167730  333222 node_ready.go:53] error getting node "ha-330867-m02": etcdserver: request timed out
	I0831 23:00:53.167807  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:00:53.167816  333222 round_trippers.go:469] Request Headers:
	I0831 23:00:53.167824  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:00:53.167831  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:00:59.566905  333222 round_trippers.go:574] Response Status: 200 OK in 6399 milliseconds
	I0831 23:00:59.568935  333222 node_ready.go:49] node "ha-330867-m02" has status "Ready":"True"
	I0831 23:00:59.568958  333222 node_ready.go:38] duration metric: took 19.569720335s for node "ha-330867-m02" to be "Ready" ...
	I0831 23:00:59.568975  333222 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 23:00:59.569015  333222 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0831 23:00:59.569028  333222 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0831 23:00:59.569113  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0831 23:00:59.569119  333222 round_trippers.go:469] Request Headers:
	I0831 23:00:59.569131  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:00:59.569136  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:00:59.619042  333222 round_trippers.go:574] Response Status: 200 OK in 49 milliseconds
	I0831 23:00:59.649323  333222 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-d67w5" in "kube-system" namespace to be "Ready" ...
	I0831 23:00:59.649523  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d67w5
	I0831 23:00:59.649551  333222 round_trippers.go:469] Request Headers:
	I0831 23:00:59.649580  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:00:59.649598  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:00:59.673921  333222 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0831 23:00:59.675168  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:00:59.675185  333222 round_trippers.go:469] Request Headers:
	I0831 23:00:59.675194  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:00:59.675198  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:00:59.697500  333222 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0831 23:00:59.698452  333222 pod_ready.go:93] pod "coredns-6f6b679f8f-d67w5" in "kube-system" namespace has status "Ready":"True"
	I0831 23:00:59.698471  333222 pod_ready.go:82] duration metric: took 49.043001ms for pod "coredns-6f6b679f8f-d67w5" in "kube-system" namespace to be "Ready" ...
	I0831 23:00:59.698482  333222 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-drznk" in "kube-system" namespace to be "Ready" ...
	I0831 23:00:59.698549  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-drznk
	I0831 23:00:59.698554  333222 round_trippers.go:469] Request Headers:
	I0831 23:00:59.698561  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:00:59.698566  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:00:59.705296  333222 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0831 23:00:59.706411  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:00:59.706461  333222 round_trippers.go:469] Request Headers:
	I0831 23:00:59.706483  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:00:59.706504  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:00:59.716717  333222 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0831 23:00:59.718069  333222 pod_ready.go:93] pod "coredns-6f6b679f8f-drznk" in "kube-system" namespace has status "Ready":"True"
	I0831 23:00:59.718125  333222 pod_ready.go:82] duration metric: took 19.634517ms for pod "coredns-6f6b679f8f-drznk" in "kube-system" namespace to be "Ready" ...
	I0831 23:00:59.718152  333222 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-330867" in "kube-system" namespace to be "Ready" ...
	I0831 23:00:59.718246  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867
	I0831 23:00:59.718273  333222 round_trippers.go:469] Request Headers:
	I0831 23:00:59.718308  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:00:59.718328  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:00:59.725842  333222 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0831 23:00:59.726898  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:00:59.726948  333222 round_trippers.go:469] Request Headers:
	I0831 23:00:59.726973  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:00:59.726993  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:00:59.733757  333222 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0831 23:00:59.736375  333222 pod_ready.go:93] pod "etcd-ha-330867" in "kube-system" namespace has status "Ready":"True"
	I0831 23:00:59.736451  333222 pod_ready.go:82] duration metric: took 18.277986ms for pod "etcd-ha-330867" in "kube-system" namespace to be "Ready" ...
	I0831 23:00:59.736479  333222 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:00:59.736576  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m02
	I0831 23:00:59.736602  333222 round_trippers.go:469] Request Headers:
	I0831 23:00:59.736624  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:00:59.736644  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:00:59.739648  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:00:59.741081  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:00:59.741139  333222 round_trippers.go:469] Request Headers:
	I0831 23:00:59.741165  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:00:59.741187  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:00:59.746752  333222 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0831 23:00:59.747870  333222 pod_ready.go:93] pod "etcd-ha-330867-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 23:00:59.747951  333222 pod_ready.go:82] duration metric: took 11.433643ms for pod "etcd-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:00:59.747981  333222 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-330867-m03" in "kube-system" namespace to be "Ready" ...
	I0831 23:00:59.748084  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m03
	I0831 23:00:59.748109  333222 round_trippers.go:469] Request Headers:
	I0831 23:00:59.748135  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:00:59.748154  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:00:59.753070  333222 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0831 23:00:59.769471  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:00:59.769546  333222 round_trippers.go:469] Request Headers:
	I0831 23:00:59.769576  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:00:59.769597  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:00:59.779325  333222 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0831 23:00:59.781025  333222 pod_ready.go:93] pod "etcd-ha-330867-m03" in "kube-system" namespace has status "Ready":"True"
	I0831 23:00:59.781131  333222 pod_ready.go:82] duration metric: took 33.122613ms for pod "etcd-ha-330867-m03" in "kube-system" namespace to be "Ready" ...
	I0831 23:00:59.781200  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-330867" in "kube-system" namespace to be "Ready" ...
	I0831 23:00:59.969615  333222 request.go:632] Waited for 188.268154ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-330867
	I0831 23:00:59.969752  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-330867
	I0831 23:00:59.969797  333222 round_trippers.go:469] Request Headers:
	I0831 23:00:59.969852  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:00:59.969870  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:00:59.973793  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:01:00.170069  333222 request.go:632] Waited for 195.356221ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:01:00.170197  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:01:00.170237  333222 round_trippers.go:469] Request Headers:
	I0831 23:01:00.170267  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:01:00.170290  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:01:00.274303  333222 round_trippers.go:574] Response Status: 200 OK in 103 milliseconds
	I0831 23:01:00.275005  333222 pod_ready.go:93] pod "kube-apiserver-ha-330867" in "kube-system" namespace has status "Ready":"True"
	I0831 23:01:00.275079  333222 pod_ready.go:82] duration metric: took 493.833507ms for pod "kube-apiserver-ha-330867" in "kube-system" namespace to be "Ready" ...
	I0831 23:01:00.275114  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:01:00.369432  333222 request.go:632] Waited for 94.206581ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-330867-m02
	I0831 23:01:00.370454  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-330867-m02
	I0831 23:01:00.370494  333222 round_trippers.go:469] Request Headers:
	I0831 23:01:00.370530  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:01:00.370573  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:01:00.427944  333222 round_trippers.go:574] Response Status: 200 OK in 57 milliseconds
	I0831 23:01:00.569536  333222 request.go:632] Waited for 140.210239ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:01:00.569601  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:01:00.569615  333222 round_trippers.go:469] Request Headers:
	I0831 23:01:00.569624  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:01:00.569635  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:01:00.603286  333222 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I0831 23:01:00.604707  333222 pod_ready.go:93] pod "kube-apiserver-ha-330867-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 23:01:00.604736  333222 pod_ready.go:82] duration metric: took 329.582049ms for pod "kube-apiserver-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:01:00.604749  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-330867-m03" in "kube-system" namespace to be "Ready" ...
	I0831 23:01:00.769999  333222 request.go:632] Waited for 165.184746ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-330867-m03
	I0831 23:01:00.770063  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-330867-m03
	I0831 23:01:00.770076  333222 round_trippers.go:469] Request Headers:
	I0831 23:01:00.770085  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:01:00.770095  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:01:00.773520  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:01:00.969604  333222 request.go:632] Waited for 195.372385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:01:00.969667  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:01:00.969673  333222 round_trippers.go:469] Request Headers:
	I0831 23:01:00.969681  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:01:00.969686  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:01:00.972532  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:01:00.973405  333222 pod_ready.go:93] pod "kube-apiserver-ha-330867-m03" in "kube-system" namespace has status "Ready":"True"
	I0831 23:01:00.973458  333222 pod_ready.go:82] duration metric: took 368.700576ms for pod "kube-apiserver-ha-330867-m03" in "kube-system" namespace to be "Ready" ...
	I0831 23:01:00.973488  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-330867" in "kube-system" namespace to be "Ready" ...
	I0831 23:01:01.169304  333222 request.go:632] Waited for 195.717474ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-330867
	I0831 23:01:01.169436  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-330867
	I0831 23:01:01.169448  333222 round_trippers.go:469] Request Headers:
	I0831 23:01:01.169457  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:01:01.169462  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:01:01.172855  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:01:01.370144  333222 request.go:632] Waited for 196.35944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:01:01.370280  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:01:01.370351  333222 round_trippers.go:469] Request Headers:
	I0831 23:01:01.370361  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:01:01.370366  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:01:01.374120  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:01:01.374795  333222 pod_ready.go:93] pod "kube-controller-manager-ha-330867" in "kube-system" namespace has status "Ready":"True"
	I0831 23:01:01.374856  333222 pod_ready.go:82] duration metric: took 401.346712ms for pod "kube-controller-manager-ha-330867" in "kube-system" namespace to be "Ready" ...
	I0831 23:01:01.374884  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:01:01.569350  333222 request.go:632] Waited for 194.371455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-330867-m02
	I0831 23:01:01.569485  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-330867-m02
	I0831 23:01:01.569519  333222 round_trippers.go:469] Request Headers:
	I0831 23:01:01.569549  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:01:01.569571  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:01:01.572638  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:01:01.770097  333222 request.go:632] Waited for 196.295293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:01:01.770220  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:01:01.770257  333222 round_trippers.go:469] Request Headers:
	I0831 23:01:01.770297  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:01:01.770317  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:01:01.774301  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:01:01.775609  333222 pod_ready.go:93] pod "kube-controller-manager-ha-330867-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 23:01:01.775701  333222 pod_ready.go:82] duration metric: took 400.794732ms for pod "kube-controller-manager-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:01:01.775731  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-330867-m03" in "kube-system" namespace to be "Ready" ...
	I0831 23:01:01.970027  333222 request.go:632] Waited for 194.185372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-330867-m03
	I0831 23:01:01.970155  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-330867-m03
	I0831 23:01:01.970197  333222 round_trippers.go:469] Request Headers:
	I0831 23:01:01.970227  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:01:01.970253  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:01:02.006508  333222 round_trippers.go:574] Response Status: 200 OK in 36 milliseconds
	I0831 23:01:02.169381  333222 request.go:632] Waited for 155.307213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:01:02.169439  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:01:02.169450  333222 round_trippers.go:469] Request Headers:
	I0831 23:01:02.169459  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:01:02.169463  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:01:02.175487  333222 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0831 23:01:02.176141  333222 pod_ready.go:93] pod "kube-controller-manager-ha-330867-m03" in "kube-system" namespace has status "Ready":"True"
	I0831 23:01:02.176159  333222 pod_ready.go:82] duration metric: took 400.384535ms for pod "kube-controller-manager-ha-330867-m03" in "kube-system" namespace to be "Ready" ...
	I0831 23:01:02.176171  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2km6v" in "kube-system" namespace to be "Ready" ...
	I0831 23:01:02.370120  333222 request.go:632] Waited for 193.850958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2km6v
	I0831 23:01:02.370239  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2km6v
	I0831 23:01:02.370263  333222 round_trippers.go:469] Request Headers:
	I0831 23:01:02.370283  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:01:02.370288  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:01:02.373352  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:01:02.569624  333222 request.go:632] Waited for 195.371277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:01:02.569696  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:01:02.569703  333222 round_trippers.go:469] Request Headers:
	I0831 23:01:02.569717  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:01:02.569727  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:01:02.572723  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:01:02.573769  333222 pod_ready.go:93] pod "kube-proxy-2km6v" in "kube-system" namespace has status "Ready":"True"
	I0831 23:01:02.573791  333222 pod_ready.go:82] duration metric: took 397.613007ms for pod "kube-proxy-2km6v" in "kube-system" namespace to be "Ready" ...
	I0831 23:01:02.573803  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5n584" in "kube-system" namespace to be "Ready" ...
	I0831 23:01:02.769842  333222 request.go:632] Waited for 195.945157ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:01:02.769945  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:01:02.769959  333222 round_trippers.go:469] Request Headers:
	I0831 23:01:02.769966  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:01:02.769969  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:01:02.772958  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:01:02.969948  333222 request.go:632] Waited for 196.329606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:01:02.970035  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:01:02.970042  333222 round_trippers.go:469] Request Headers:
	I0831 23:01:02.970050  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:01:02.970055  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:01:02.972981  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:01:02.973580  333222 pod_ready.go:93] pod "kube-proxy-5n584" in "kube-system" namespace has status "Ready":"True"
	I0831 23:01:02.973603  333222 pod_ready.go:82] duration metric: took 399.792039ms for pod "kube-proxy-5n584" in "kube-system" namespace to be "Ready" ...
	I0831 23:01:02.973617  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-72g7x" in "kube-system" namespace to be "Ready" ...
	I0831 23:01:03.169406  333222 request.go:632] Waited for 195.725163ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-72g7x
	I0831 23:01:03.169481  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-72g7x
	I0831 23:01:03.169491  333222 round_trippers.go:469] Request Headers:
	I0831 23:01:03.169507  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:01:03.169512  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:01:03.172289  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:01:03.369654  333222 request.go:632] Waited for 196.122149ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:01:03.369711  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:01:03.369720  333222 round_trippers.go:469] Request Headers:
	I0831 23:01:03.369728  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:01:03.369736  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:01:03.372523  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:01:03.373439  333222 pod_ready.go:93] pod "kube-proxy-72g7x" in "kube-system" namespace has status "Ready":"True"
	I0831 23:01:03.373470  333222 pod_ready.go:82] duration metric: took 399.844149ms for pod "kube-proxy-72g7x" in "kube-system" namespace to be "Ready" ...
	I0831 23:01:03.373482  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fzpmn" in "kube-system" namespace to be "Ready" ...
	I0831 23:01:03.569482  333222 request.go:632] Waited for 195.934999ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fzpmn
	I0831 23:01:03.569566  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fzpmn
	I0831 23:01:03.569594  333222 round_trippers.go:469] Request Headers:
	I0831 23:01:03.569608  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:01:03.569613  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:01:03.572514  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:01:03.769523  333222 request.go:632] Waited for 196.279449ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:01:03.769590  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:01:03.769599  333222 round_trippers.go:469] Request Headers:
	I0831 23:01:03.769607  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:01:03.769616  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:01:03.772469  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:01:03.773307  333222 pod_ready.go:93] pod "kube-proxy-fzpmn" in "kube-system" namespace has status "Ready":"True"
	I0831 23:01:03.773328  333222 pod_ready.go:82] duration metric: took 399.838816ms for pod "kube-proxy-fzpmn" in "kube-system" namespace to be "Ready" ...
	I0831 23:01:03.773353  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-330867" in "kube-system" namespace to be "Ready" ...
	I0831 23:01:03.969267  333222 request.go:632] Waited for 195.820013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-330867
	I0831 23:01:03.969353  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-330867
	I0831 23:01:03.969370  333222 round_trippers.go:469] Request Headers:
	I0831 23:01:03.969379  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:01:03.969387  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:01:03.987094  333222 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0831 23:01:04.170207  333222 request.go:632] Waited for 182.322094ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:01:04.170273  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:01:04.170283  333222 round_trippers.go:469] Request Headers:
	I0831 23:01:04.170292  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:01:04.170300  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:01:04.173270  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:01:04.173929  333222 pod_ready.go:93] pod "kube-scheduler-ha-330867" in "kube-system" namespace has status "Ready":"True"
	I0831 23:01:04.173958  333222 pod_ready.go:82] duration metric: took 400.591739ms for pod "kube-scheduler-ha-330867" in "kube-system" namespace to be "Ready" ...
	I0831 23:01:04.173971  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:01:04.369458  333222 request.go:632] Waited for 195.416052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-330867-m02
	I0831 23:01:04.369560  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-330867-m02
	I0831 23:01:04.369570  333222 round_trippers.go:469] Request Headers:
	I0831 23:01:04.369579  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:01:04.369586  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:01:04.373271  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:01:04.569189  333222 request.go:632] Waited for 195.184948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:01:04.569269  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:01:04.569278  333222 round_trippers.go:469] Request Headers:
	I0831 23:01:04.569286  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:01:04.569295  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:01:04.572020  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:01:04.572656  333222 pod_ready.go:93] pod "kube-scheduler-ha-330867-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 23:01:04.572679  333222 pod_ready.go:82] duration metric: took 398.69963ms for pod "kube-scheduler-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:01:04.572690  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-330867-m03" in "kube-system" namespace to be "Ready" ...
	I0831 23:01:04.769430  333222 request.go:632] Waited for 196.676657ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-330867-m03
	I0831 23:01:04.769497  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-330867-m03
	I0831 23:01:04.769507  333222 round_trippers.go:469] Request Headers:
	I0831 23:01:04.769530  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:01:04.769538  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:01:04.772378  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:01:04.970028  333222 request.go:632] Waited for 196.329401ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:01:04.970106  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:01:04.970131  333222 round_trippers.go:469] Request Headers:
	I0831 23:01:04.970143  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:01:04.970153  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:01:04.973404  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:01:04.974151  333222 pod_ready.go:93] pod "kube-scheduler-ha-330867-m03" in "kube-system" namespace has status "Ready":"True"
	I0831 23:01:04.974176  333222 pod_ready.go:82] duration metric: took 401.478216ms for pod "kube-scheduler-ha-330867-m03" in "kube-system" namespace to be "Ready" ...
	I0831 23:01:04.974190  333222 pod_ready.go:39] duration metric: took 5.405204s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 23:01:04.974205  333222 api_server.go:52] waiting for apiserver process to appear ...
	I0831 23:01:04.974273  333222 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 23:01:04.985070  333222 api_server.go:72] duration metric: took 25.182302416s to wait for apiserver process to appear ...
	I0831 23:01:04.985134  333222 api_server.go:88] waiting for apiserver healthz status ...
	I0831 23:01:04.985165  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:04.992933  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:04.992963  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:05.485673  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:05.502509  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:05.502598  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:05.986179  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:06.003960  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:06.004047  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:06.485305  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:06.493200  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:06.493276  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:06.985716  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:06.993491  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:06.993526  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:07.486122  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:07.493733  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:07.493760  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:07.985298  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:07.993392  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:07.993425  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:08.485784  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:08.495466  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:08.495536  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:08.986193  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:08.994358  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:08.994457  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:09.486089  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:09.494012  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:09.494044  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:09.985459  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:09.993389  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:09.993426  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:10.485284  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:10.493485  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:10.493521  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:10.985675  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:10.993554  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:10.993585  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:11.485917  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:11.493738  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:11.493777  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:11.986140  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:11.998061  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:11.998098  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:12.485348  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:12.494119  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:12.494167  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:12.985290  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:12.995077  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:12.995108  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:13.486228  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:13.494717  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:13.494793  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:13.985263  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:13.993272  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:13.993312  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:14.485912  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:14.493668  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:14.493702  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:14.985247  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:14.993971  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:14.994004  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:15.485855  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:15.493698  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:15.493729  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:15.986029  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:15.994270  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:15.994301  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:16.485565  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:16.493927  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:16.493962  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:16.985334  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:16.993057  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:16.993096  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:17.485295  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:17.494492  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:17.494587  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:17.986147  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:17.994091  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:17.994130  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:18.485692  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:18.494871  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:18.494901  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:18.985393  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:18.993901  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:18.993953  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:19.485372  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:19.495286  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:19.495323  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:19.985563  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:20.021850  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:20.021883  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:20.485581  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:20.555121  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:20.555155  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:20.985498  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:20.994708  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:20.994976  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:21.485276  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:21.493814  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:21.493897  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:21.985361  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:22.017405  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:22.017501  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:22.486078  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:22.499727  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:22.499825  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:22.985314  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:22.995260  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:22.995350  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:23.486006  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:23.501105  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:23.501209  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:23.985864  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:23.994412  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:23.994493  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:24.486097  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:24.509011  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:24.509037  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:24.985334  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:24.993228  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:24.993260  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:25.485693  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:25.493691  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:25.493729  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:25.985286  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:25.993890  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:25.993919  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:26.485280  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:26.493324  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:26.493355  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:26.986018  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:26.993899  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:26.993929  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:27.485537  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:27.493705  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:27.493734  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:27.986197  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:27.993960  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:27.994003  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:28.485602  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:28.493902  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:28.493931  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:28.985349  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:28.993293  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:28.993322  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:29.485890  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:29.493571  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:29.493599  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:29.985257  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:29.992959  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:29.992986  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:30.485677  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:30.494147  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:30.494179  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:30.985318  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:30.993846  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:30.993878  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:31.485428  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:31.493572  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:31.493606  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:31.986110  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:31.995154  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:31.995185  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:32.485386  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:32.493586  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:32.493615  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:32.986273  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:32.995586  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:32.995613  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:33.485269  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:33.493022  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:33.493045  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:33.985298  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:33.993089  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:33.993115  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:34.485688  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:34.493716  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:34.493752  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:34.985462  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:34.993755  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:34.993792  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:35.485513  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:35.493296  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:35.493327  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:35.985620  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:35.993864  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:35.993899  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:36.485309  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:36.492956  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:36.492989  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:36.985321  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:36.993441  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:36.993467  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:37.486085  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:37.493965  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:37.493998  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:37.985361  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:37.993393  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:37.993429  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:38.485249  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:38.494895  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:38.494926  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:38.985435  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:38.993215  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:38.993243  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:39.485893  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:39.493825  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:39.493857  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:39.985512  333222 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0831 23:01:39.985626  333222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0831 23:01:40.064116  333222 cri.go:89] found id: "fd0c65aa94dee8cda50d47b3d5eacea320d313b6662b7baadf65b44c6d52592a"
	I0831 23:01:40.064141  333222 cri.go:89] found id: "5fe235cf9ba0e1e511e7507d0a519488cff196515dc9b93d2ea0ff08a0ef27f4"
	I0831 23:01:40.064149  333222 cri.go:89] found id: ""
	I0831 23:01:40.064157  333222 logs.go:276] 2 containers: [fd0c65aa94dee8cda50d47b3d5eacea320d313b6662b7baadf65b44c6d52592a 5fe235cf9ba0e1e511e7507d0a519488cff196515dc9b93d2ea0ff08a0ef27f4]
	I0831 23:01:40.064222  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:01:40.076130  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:01:40.081135  333222 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0831 23:01:40.081213  333222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0831 23:01:40.128886  333222 cri.go:89] found id: "aaad87fb42bc37c90ca71e2026d1ecbbb02a671f3cbd0b252bfaa783ca6c4625"
	I0831 23:01:40.128907  333222 cri.go:89] found id: "aeb71071ca6488724f36aaba5bebbff2b889c3e603a843a1f41be8099702ed7b"
	I0831 23:01:40.128913  333222 cri.go:89] found id: ""
	I0831 23:01:40.128920  333222 logs.go:276] 2 containers: [aaad87fb42bc37c90ca71e2026d1ecbbb02a671f3cbd0b252bfaa783ca6c4625 aeb71071ca6488724f36aaba5bebbff2b889c3e603a843a1f41be8099702ed7b]
	I0831 23:01:40.128986  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:01:40.136029  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:01:40.140698  333222 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0831 23:01:40.140781  333222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0831 23:01:40.183191  333222 cri.go:89] found id: ""
	I0831 23:01:40.183216  333222 logs.go:276] 0 containers: []
	W0831 23:01:40.183225  333222 logs.go:278] No container was found matching "coredns"
	I0831 23:01:40.183232  333222 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0831 23:01:40.183295  333222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0831 23:01:40.230808  333222 cri.go:89] found id: "0b0a4c672012aecd1ee95d46fc26d2c181cc79c7388cd2ad8b35804d32bcaeaf"
	I0831 23:01:40.230830  333222 cri.go:89] found id: "10ac90c198bec3f53cde2aee178f1cc3bdce785b292c90fa4980e5b818440a6e"
	I0831 23:01:40.230835  333222 cri.go:89] found id: ""
	I0831 23:01:40.230843  333222 logs.go:276] 2 containers: [0b0a4c672012aecd1ee95d46fc26d2c181cc79c7388cd2ad8b35804d32bcaeaf 10ac90c198bec3f53cde2aee178f1cc3bdce785b292c90fa4980e5b818440a6e]
	I0831 23:01:40.230903  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:01:40.234708  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:01:40.238408  333222 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0831 23:01:40.238483  333222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0831 23:01:40.295211  333222 cri.go:89] found id: "e5bb0fdeb0c305a6e0c20ab3b5587693040397e301cd57e5679de48890cc0f0a"
	I0831 23:01:40.295231  333222 cri.go:89] found id: ""
	I0831 23:01:40.295239  333222 logs.go:276] 1 containers: [e5bb0fdeb0c305a6e0c20ab3b5587693040397e301cd57e5679de48890cc0f0a]
	I0831 23:01:40.295328  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:01:40.298995  333222 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0831 23:01:40.299068  333222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0831 23:01:40.339071  333222 cri.go:89] found id: "be1899b6fae5af5f319c9df88a30827d886cf8d17bbf2ceed0e343a01b4f35a5"
	I0831 23:01:40.339096  333222 cri.go:89] found id: "39162654cddd07aa8c25ab071f6b247ca652ecc23b52dc01256ca2668169cfe8"
	I0831 23:01:40.339101  333222 cri.go:89] found id: ""
	I0831 23:01:40.339108  333222 logs.go:276] 2 containers: [be1899b6fae5af5f319c9df88a30827d886cf8d17bbf2ceed0e343a01b4f35a5 39162654cddd07aa8c25ab071f6b247ca652ecc23b52dc01256ca2668169cfe8]
	I0831 23:01:40.339167  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:01:40.342843  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:01:40.346603  333222 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0831 23:01:40.346692  333222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0831 23:01:40.385550  333222 cri.go:89] found id: "ece582614fc7ced631aba780810d20e628888d4818c6e6216627681c7acfdfc6"
	I0831 23:01:40.385571  333222 cri.go:89] found id: ""
	I0831 23:01:40.385579  333222 logs.go:276] 1 containers: [ece582614fc7ced631aba780810d20e628888d4818c6e6216627681c7acfdfc6]
	I0831 23:01:40.385634  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:01:40.389306  333222 logs.go:123] Gathering logs for etcd [aeb71071ca6488724f36aaba5bebbff2b889c3e603a843a1f41be8099702ed7b] ...
	I0831 23:01:40.389375  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aeb71071ca6488724f36aaba5bebbff2b889c3e603a843a1f41be8099702ed7b"
	I0831 23:01:40.441910  333222 logs.go:123] Gathering logs for kindnet [ece582614fc7ced631aba780810d20e628888d4818c6e6216627681c7acfdfc6] ...
	I0831 23:01:40.441948  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ece582614fc7ced631aba780810d20e628888d4818c6e6216627681c7acfdfc6"
	I0831 23:01:40.490173  333222 logs.go:123] Gathering logs for kubelet ...
	I0831 23:01:40.490204  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 23:01:40.573866  333222 logs.go:123] Gathering logs for kube-apiserver [5fe235cf9ba0e1e511e7507d0a519488cff196515dc9b93d2ea0ff08a0ef27f4] ...
	I0831 23:01:40.573903  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fe235cf9ba0e1e511e7507d0a519488cff196515dc9b93d2ea0ff08a0ef27f4"
	I0831 23:01:40.618898  333222 logs.go:123] Gathering logs for kube-scheduler [10ac90c198bec3f53cde2aee178f1cc3bdce785b292c90fa4980e5b818440a6e] ...
	I0831 23:01:40.618979  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10ac90c198bec3f53cde2aee178f1cc3bdce785b292c90fa4980e5b818440a6e"
	I0831 23:01:40.654676  333222 logs.go:123] Gathering logs for dmesg ...
	I0831 23:01:40.654705  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 23:01:40.671666  333222 logs.go:123] Gathering logs for etcd [aaad87fb42bc37c90ca71e2026d1ecbbb02a671f3cbd0b252bfaa783ca6c4625] ...
	I0831 23:01:40.671695  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aaad87fb42bc37c90ca71e2026d1ecbbb02a671f3cbd0b252bfaa783ca6c4625"
	I0831 23:01:40.737151  333222 logs.go:123] Gathering logs for kube-controller-manager [be1899b6fae5af5f319c9df88a30827d886cf8d17bbf2ceed0e343a01b4f35a5] ...
	I0831 23:01:40.737186  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be1899b6fae5af5f319c9df88a30827d886cf8d17bbf2ceed0e343a01b4f35a5"
	I0831 23:01:40.800090  333222 logs.go:123] Gathering logs for container status ...
	I0831 23:01:40.800127  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 23:01:40.848302  333222 logs.go:123] Gathering logs for describe nodes ...
	I0831 23:01:40.848331  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 23:01:41.494967  333222 logs.go:123] Gathering logs for kube-scheduler [0b0a4c672012aecd1ee95d46fc26d2c181cc79c7388cd2ad8b35804d32bcaeaf] ...
	I0831 23:01:41.495004  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b0a4c672012aecd1ee95d46fc26d2c181cc79c7388cd2ad8b35804d32bcaeaf"
	I0831 23:01:41.559068  333222 logs.go:123] Gathering logs for kube-controller-manager [39162654cddd07aa8c25ab071f6b247ca652ecc23b52dc01256ca2668169cfe8] ...
	I0831 23:01:41.559100  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39162654cddd07aa8c25ab071f6b247ca652ecc23b52dc01256ca2668169cfe8"
	I0831 23:01:41.607905  333222 logs.go:123] Gathering logs for CRI-O ...
	I0831 23:01:41.607934  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0831 23:01:41.705867  333222 logs.go:123] Gathering logs for kube-apiserver [fd0c65aa94dee8cda50d47b3d5eacea320d313b6662b7baadf65b44c6d52592a] ...
	I0831 23:01:41.705909  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd0c65aa94dee8cda50d47b3d5eacea320d313b6662b7baadf65b44c6d52592a"
	I0831 23:01:41.780753  333222 logs.go:123] Gathering logs for kube-proxy [e5bb0fdeb0c305a6e0c20ab3b5587693040397e301cd57e5679de48890cc0f0a] ...
	I0831 23:01:41.780798  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5bb0fdeb0c305a6e0c20ab3b5587693040397e301cd57e5679de48890cc0f0a"
	I0831 23:01:44.330604  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:46.147479  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:01:46.147512  333222 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:01:46.147540  333222 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0831 23:01:46.147601  333222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0831 23:01:46.204347  333222 cri.go:89] found id: "fd0c65aa94dee8cda50d47b3d5eacea320d313b6662b7baadf65b44c6d52592a"
	I0831 23:01:46.204371  333222 cri.go:89] found id: "5fe235cf9ba0e1e511e7507d0a519488cff196515dc9b93d2ea0ff08a0ef27f4"
	I0831 23:01:46.204376  333222 cri.go:89] found id: ""
	I0831 23:01:46.204383  333222 logs.go:276] 2 containers: [fd0c65aa94dee8cda50d47b3d5eacea320d313b6662b7baadf65b44c6d52592a 5fe235cf9ba0e1e511e7507d0a519488cff196515dc9b93d2ea0ff08a0ef27f4]
	I0831 23:01:46.204502  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:01:46.208882  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:01:46.213882  333222 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0831 23:01:46.213954  333222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0831 23:01:46.271881  333222 cri.go:89] found id: "aaad87fb42bc37c90ca71e2026d1ecbbb02a671f3cbd0b252bfaa783ca6c4625"
	I0831 23:01:46.271906  333222 cri.go:89] found id: "aeb71071ca6488724f36aaba5bebbff2b889c3e603a843a1f41be8099702ed7b"
	I0831 23:01:46.271912  333222 cri.go:89] found id: ""
	I0831 23:01:46.271919  333222 logs.go:276] 2 containers: [aaad87fb42bc37c90ca71e2026d1ecbbb02a671f3cbd0b252bfaa783ca6c4625 aeb71071ca6488724f36aaba5bebbff2b889c3e603a843a1f41be8099702ed7b]
	I0831 23:01:46.271984  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:01:46.275676  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:01:46.279063  333222 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0831 23:01:46.279157  333222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0831 23:01:46.324123  333222 cri.go:89] found id: ""
	I0831 23:01:46.324147  333222 logs.go:276] 0 containers: []
	W0831 23:01:46.324157  333222 logs.go:278] No container was found matching "coredns"
	I0831 23:01:46.324163  333222 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0831 23:01:46.324229  333222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0831 23:01:46.362239  333222 cri.go:89] found id: "0b0a4c672012aecd1ee95d46fc26d2c181cc79c7388cd2ad8b35804d32bcaeaf"
	I0831 23:01:46.362260  333222 cri.go:89] found id: "10ac90c198bec3f53cde2aee178f1cc3bdce785b292c90fa4980e5b818440a6e"
	I0831 23:01:46.362265  333222 cri.go:89] found id: ""
	I0831 23:01:46.362273  333222 logs.go:276] 2 containers: [0b0a4c672012aecd1ee95d46fc26d2c181cc79c7388cd2ad8b35804d32bcaeaf 10ac90c198bec3f53cde2aee178f1cc3bdce785b292c90fa4980e5b818440a6e]
	I0831 23:01:46.362331  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:01:46.365977  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:01:46.369429  333222 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0831 23:01:46.369505  333222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0831 23:01:46.407785  333222 cri.go:89] found id: "e5bb0fdeb0c305a6e0c20ab3b5587693040397e301cd57e5679de48890cc0f0a"
	I0831 23:01:46.407808  333222 cri.go:89] found id: ""
	I0831 23:01:46.407817  333222 logs.go:276] 1 containers: [e5bb0fdeb0c305a6e0c20ab3b5587693040397e301cd57e5679de48890cc0f0a]
	I0831 23:01:46.407871  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:01:46.411284  333222 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0831 23:01:46.411352  333222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0831 23:01:46.449382  333222 cri.go:89] found id: "be1899b6fae5af5f319c9df88a30827d886cf8d17bbf2ceed0e343a01b4f35a5"
	I0831 23:01:46.449402  333222 cri.go:89] found id: "39162654cddd07aa8c25ab071f6b247ca652ecc23b52dc01256ca2668169cfe8"
	I0831 23:01:46.449407  333222 cri.go:89] found id: ""
	I0831 23:01:46.449414  333222 logs.go:276] 2 containers: [be1899b6fae5af5f319c9df88a30827d886cf8d17bbf2ceed0e343a01b4f35a5 39162654cddd07aa8c25ab071f6b247ca652ecc23b52dc01256ca2668169cfe8]
	I0831 23:01:46.449473  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:01:46.453006  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:01:46.456441  333222 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0831 23:01:46.456516  333222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0831 23:01:46.502329  333222 cri.go:89] found id: "ece582614fc7ced631aba780810d20e628888d4818c6e6216627681c7acfdfc6"
	I0831 23:01:46.502350  333222 cri.go:89] found id: ""
	I0831 23:01:46.502357  333222 logs.go:276] 1 containers: [ece582614fc7ced631aba780810d20e628888d4818c6e6216627681c7acfdfc6]
	I0831 23:01:46.502415  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:01:46.506011  333222 logs.go:123] Gathering logs for etcd [aaad87fb42bc37c90ca71e2026d1ecbbb02a671f3cbd0b252bfaa783ca6c4625] ...
	I0831 23:01:46.506055  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aaad87fb42bc37c90ca71e2026d1ecbbb02a671f3cbd0b252bfaa783ca6c4625"
	I0831 23:01:46.566273  333222 logs.go:123] Gathering logs for kindnet [ece582614fc7ced631aba780810d20e628888d4818c6e6216627681c7acfdfc6] ...
	I0831 23:01:46.566309  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ece582614fc7ced631aba780810d20e628888d4818c6e6216627681c7acfdfc6"
	I0831 23:01:46.607649  333222 logs.go:123] Gathering logs for describe nodes ...
	I0831 23:01:46.607676  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 23:01:47.024333  333222 logs.go:123] Gathering logs for kube-apiserver [fd0c65aa94dee8cda50d47b3d5eacea320d313b6662b7baadf65b44c6d52592a] ...
	I0831 23:01:47.024376  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd0c65aa94dee8cda50d47b3d5eacea320d313b6662b7baadf65b44c6d52592a"
	I0831 23:01:47.097409  333222 logs.go:123] Gathering logs for kube-controller-manager [be1899b6fae5af5f319c9df88a30827d886cf8d17bbf2ceed0e343a01b4f35a5] ...
	I0831 23:01:47.097448  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be1899b6fae5af5f319c9df88a30827d886cf8d17bbf2ceed0e343a01b4f35a5"
	I0831 23:01:47.212057  333222 logs.go:123] Gathering logs for kube-apiserver [5fe235cf9ba0e1e511e7507d0a519488cff196515dc9b93d2ea0ff08a0ef27f4] ...
	I0831 23:01:47.212094  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fe235cf9ba0e1e511e7507d0a519488cff196515dc9b93d2ea0ff08a0ef27f4"
	I0831 23:01:47.296524  333222 logs.go:123] Gathering logs for kube-controller-manager [39162654cddd07aa8c25ab071f6b247ca652ecc23b52dc01256ca2668169cfe8] ...
	I0831 23:01:47.296664  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39162654cddd07aa8c25ab071f6b247ca652ecc23b52dc01256ca2668169cfe8"
	I0831 23:01:47.352747  333222 logs.go:123] Gathering logs for container status ...
	I0831 23:01:47.352820  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 23:01:47.421443  333222 logs.go:123] Gathering logs for kubelet ...
	I0831 23:01:47.421512  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 23:01:47.510596  333222 logs.go:123] Gathering logs for dmesg ...
	I0831 23:01:47.510676  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 23:01:47.528101  333222 logs.go:123] Gathering logs for etcd [aeb71071ca6488724f36aaba5bebbff2b889c3e603a843a1f41be8099702ed7b] ...
	I0831 23:01:47.528129  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aeb71071ca6488724f36aaba5bebbff2b889c3e603a843a1f41be8099702ed7b"
	I0831 23:01:47.584333  333222 logs.go:123] Gathering logs for kube-scheduler [0b0a4c672012aecd1ee95d46fc26d2c181cc79c7388cd2ad8b35804d32bcaeaf] ...
	I0831 23:01:47.584369  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b0a4c672012aecd1ee95d46fc26d2c181cc79c7388cd2ad8b35804d32bcaeaf"
	I0831 23:01:47.625492  333222 logs.go:123] Gathering logs for kube-scheduler [10ac90c198bec3f53cde2aee178f1cc3bdce785b292c90fa4980e5b818440a6e] ...
	I0831 23:01:47.625524  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10ac90c198bec3f53cde2aee178f1cc3bdce785b292c90fa4980e5b818440a6e"
	I0831 23:01:47.665344  333222 logs.go:123] Gathering logs for kube-proxy [e5bb0fdeb0c305a6e0c20ab3b5587693040397e301cd57e5679de48890cc0f0a] ...
	I0831 23:01:47.665378  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5bb0fdeb0c305a6e0c20ab3b5587693040397e301cd57e5679de48890cc0f0a"
	I0831 23:01:47.734435  333222 logs.go:123] Gathering logs for CRI-O ...
	I0831 23:01:47.734463  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0831 23:01:50.310542  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:01:50.319928  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0831 23:01:50.320018  333222 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0831 23:01:50.320031  333222 round_trippers.go:469] Request Headers:
	I0831 23:01:50.320041  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:01:50.320045  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:01:50.333132  333222 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0831 23:01:50.333283  333222 api_server.go:141] control plane version: v1.31.0
	I0831 23:01:50.333308  333222 api_server.go:131] duration metric: took 45.348155474s to wait for apiserver health ...
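The loop above is minikube polling the control-plane apiserver until /healthz returns 200; each 500 response lists which post-start hooks have not yet completed. A minimal way to reproduce the same verbose health query by hand, assuming the ha-330867 profile/context created by this test run, is:

	# query the apiserver's verbose healthz endpoint via kubectl's raw API access
	kubectl --context ha-330867 get --raw='/healthz?verbose'

Once the apiserver is healthy this should print the same per-hook checklist with every entry marked "[+] ... ok" followed by "healthz check passed".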
	I0831 23:01:50.333318  333222 system_pods.go:43] waiting for kube-system pods to appear ...
	I0831 23:01:50.333345  333222 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0831 23:01:50.333410  333222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0831 23:01:50.379633  333222 cri.go:89] found id: "fd0c65aa94dee8cda50d47b3d5eacea320d313b6662b7baadf65b44c6d52592a"
	I0831 23:01:50.379658  333222 cri.go:89] found id: "5fe235cf9ba0e1e511e7507d0a519488cff196515dc9b93d2ea0ff08a0ef27f4"
	I0831 23:01:50.379662  333222 cri.go:89] found id: ""
	I0831 23:01:50.379669  333222 logs.go:276] 2 containers: [fd0c65aa94dee8cda50d47b3d5eacea320d313b6662b7baadf65b44c6d52592a 5fe235cf9ba0e1e511e7507d0a519488cff196515dc9b93d2ea0ff08a0ef27f4]
	I0831 23:01:50.379724  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:01:50.383384  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:01:50.386875  333222 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0831 23:01:50.386945  333222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0831 23:01:50.426538  333222 cri.go:89] found id: "aaad87fb42bc37c90ca71e2026d1ecbbb02a671f3cbd0b252bfaa783ca6c4625"
	I0831 23:01:50.426563  333222 cri.go:89] found id: "aeb71071ca6488724f36aaba5bebbff2b889c3e603a843a1f41be8099702ed7b"
	I0831 23:01:50.426569  333222 cri.go:89] found id: ""
	I0831 23:01:50.426576  333222 logs.go:276] 2 containers: [aaad87fb42bc37c90ca71e2026d1ecbbb02a671f3cbd0b252bfaa783ca6c4625 aeb71071ca6488724f36aaba5bebbff2b889c3e603a843a1f41be8099702ed7b]
	I0831 23:01:50.426632  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:01:50.430401  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:01:50.434410  333222 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0831 23:01:50.434480  333222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0831 23:01:50.471919  333222 cri.go:89] found id: ""
	I0831 23:01:50.471944  333222 logs.go:276] 0 containers: []
	W0831 23:01:50.471954  333222 logs.go:278] No container was found matching "coredns"
	I0831 23:01:50.471961  333222 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0831 23:01:50.472030  333222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0831 23:01:50.509784  333222 cri.go:89] found id: "0b0a4c672012aecd1ee95d46fc26d2c181cc79c7388cd2ad8b35804d32bcaeaf"
	I0831 23:01:50.509808  333222 cri.go:89] found id: "10ac90c198bec3f53cde2aee178f1cc3bdce785b292c90fa4980e5b818440a6e"
	I0831 23:01:50.509813  333222 cri.go:89] found id: ""
	I0831 23:01:50.509825  333222 logs.go:276] 2 containers: [0b0a4c672012aecd1ee95d46fc26d2c181cc79c7388cd2ad8b35804d32bcaeaf 10ac90c198bec3f53cde2aee178f1cc3bdce785b292c90fa4980e5b818440a6e]
	I0831 23:01:50.509910  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:01:50.513622  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:01:50.517064  333222 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0831 23:01:50.517161  333222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0831 23:01:50.554322  333222 cri.go:89] found id: "e5bb0fdeb0c305a6e0c20ab3b5587693040397e301cd57e5679de48890cc0f0a"
	I0831 23:01:50.554344  333222 cri.go:89] found id: ""
	I0831 23:01:50.554352  333222 logs.go:276] 1 containers: [e5bb0fdeb0c305a6e0c20ab3b5587693040397e301cd57e5679de48890cc0f0a]
	I0831 23:01:50.554426  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:01:50.558016  333222 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0831 23:01:50.558139  333222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0831 23:01:50.598228  333222 cri.go:89] found id: "be1899b6fae5af5f319c9df88a30827d886cf8d17bbf2ceed0e343a01b4f35a5"
	I0831 23:01:50.598258  333222 cri.go:89] found id: "39162654cddd07aa8c25ab071f6b247ca652ecc23b52dc01256ca2668169cfe8"
	I0831 23:01:50.598264  333222 cri.go:89] found id: ""
	I0831 23:01:50.598270  333222 logs.go:276] 2 containers: [be1899b6fae5af5f319c9df88a30827d886cf8d17bbf2ceed0e343a01b4f35a5 39162654cddd07aa8c25ab071f6b247ca652ecc23b52dc01256ca2668169cfe8]
	I0831 23:01:50.598328  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:01:50.602816  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:01:50.606254  333222 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0831 23:01:50.606343  333222 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0831 23:01:50.644649  333222 cri.go:89] found id: "ece582614fc7ced631aba780810d20e628888d4818c6e6216627681c7acfdfc6"
	I0831 23:01:50.644669  333222 cri.go:89] found id: ""
	I0831 23:01:50.644677  333222 logs.go:276] 1 containers: [ece582614fc7ced631aba780810d20e628888d4818c6e6216627681c7acfdfc6]
	I0831 23:01:50.644731  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:01:50.648128  333222 logs.go:123] Gathering logs for kubelet ...
	I0831 23:01:50.648153  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 23:01:50.739457  333222 logs.go:123] Gathering logs for dmesg ...
	I0831 23:01:50.739502  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 23:01:50.757595  333222 logs.go:123] Gathering logs for kube-apiserver [fd0c65aa94dee8cda50d47b3d5eacea320d313b6662b7baadf65b44c6d52592a] ...
	I0831 23:01:50.757626  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd0c65aa94dee8cda50d47b3d5eacea320d313b6662b7baadf65b44c6d52592a"
	I0831 23:01:50.828874  333222 logs.go:123] Gathering logs for kube-scheduler [0b0a4c672012aecd1ee95d46fc26d2c181cc79c7388cd2ad8b35804d32bcaeaf] ...
	I0831 23:01:50.828910  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b0a4c672012aecd1ee95d46fc26d2c181cc79c7388cd2ad8b35804d32bcaeaf"
	I0831 23:01:50.875080  333222 logs.go:123] Gathering logs for kube-scheduler [10ac90c198bec3f53cde2aee178f1cc3bdce785b292c90fa4980e5b818440a6e] ...
	I0831 23:01:50.875111  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10ac90c198bec3f53cde2aee178f1cc3bdce785b292c90fa4980e5b818440a6e"
	I0831 23:01:50.930574  333222 logs.go:123] Gathering logs for kube-proxy [e5bb0fdeb0c305a6e0c20ab3b5587693040397e301cd57e5679de48890cc0f0a] ...
	I0831 23:01:50.930644  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5bb0fdeb0c305a6e0c20ab3b5587693040397e301cd57e5679de48890cc0f0a"
	I0831 23:01:50.977471  333222 logs.go:123] Gathering logs for kindnet [ece582614fc7ced631aba780810d20e628888d4818c6e6216627681c7acfdfc6] ...
	I0831 23:01:50.977499  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ece582614fc7ced631aba780810d20e628888d4818c6e6216627681c7acfdfc6"
	I0831 23:01:51.036050  333222 logs.go:123] Gathering logs for CRI-O ...
	I0831 23:01:51.036081  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0831 23:01:51.111001  333222 logs.go:123] Gathering logs for describe nodes ...
	I0831 23:01:51.111040  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 23:01:51.403213  333222 logs.go:123] Gathering logs for kube-apiserver [5fe235cf9ba0e1e511e7507d0a519488cff196515dc9b93d2ea0ff08a0ef27f4] ...
	I0831 23:01:51.403323  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fe235cf9ba0e1e511e7507d0a519488cff196515dc9b93d2ea0ff08a0ef27f4"
	I0831 23:01:51.447070  333222 logs.go:123] Gathering logs for etcd [aeb71071ca6488724f36aaba5bebbff2b889c3e603a843a1f41be8099702ed7b] ...
	I0831 23:01:51.447100  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aeb71071ca6488724f36aaba5bebbff2b889c3e603a843a1f41be8099702ed7b"
	I0831 23:01:51.514399  333222 logs.go:123] Gathering logs for kube-controller-manager [be1899b6fae5af5f319c9df88a30827d886cf8d17bbf2ceed0e343a01b4f35a5] ...
	I0831 23:01:51.514435  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be1899b6fae5af5f319c9df88a30827d886cf8d17bbf2ceed0e343a01b4f35a5"
	I0831 23:01:51.573347  333222 logs.go:123] Gathering logs for kube-controller-manager [39162654cddd07aa8c25ab071f6b247ca652ecc23b52dc01256ca2668169cfe8] ...
	I0831 23:01:51.573385  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39162654cddd07aa8c25ab071f6b247ca652ecc23b52dc01256ca2668169cfe8"
	I0831 23:01:51.615922  333222 logs.go:123] Gathering logs for container status ...
	I0831 23:01:51.615953  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 23:01:51.666793  333222 logs.go:123] Gathering logs for etcd [aaad87fb42bc37c90ca71e2026d1ecbbb02a671f3cbd0b252bfaa783ca6c4625] ...
	I0831 23:01:51.666824  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aaad87fb42bc37c90ca71e2026d1ecbbb02a671f3cbd0b252bfaa783ca6c4625"
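
The log-gathering block above (cri.go / logs.go) works in two steps: resolve container IDs with `crictl ps -a --quiet --name=<component>`, then tail each ID's log with `crictl logs --tail 400`. Below is a minimal Go sketch of that flow; it shells out with os/exec on the local host rather than over minikube's ssh_runner, and the container name "kube-controller-manager" is only an example, so treat it as an illustration rather than minikube's actual code.

// crictl_tail.go: minimal sketch of the "list IDs, then tail logs" flow above.
// Runs crictl locally via os/exec (minikube runs the same commands over SSH).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers whose name matches `name`.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := containerIDs("kube-controller-manager") // example component name
	if err != nil {
		fmt.Println("listing containers:", err)
		return
	}
	for _, id := range ids {
		// Tail the last 400 lines of each container's log, as logs.go does above.
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			fmt.Printf("logs for %s: %v\n", id, err)
			continue
		}
		fmt.Printf("=== %s ===\n%s", id, out)
	}
}
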
	I0831 23:01:54.232567  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0831 23:01:54.232591  333222 round_trippers.go:469] Request Headers:
	I0831 23:01:54.232601  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:01:54.232606  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:01:54.240709  333222 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0831 23:01:54.250610  333222 system_pods.go:59] 26 kube-system pods found
	I0831 23:01:54.250653  333222 system_pods.go:61] "coredns-6f6b679f8f-d67w5" [047da125-aee8-40c2-b647-a70792abe582] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0831 23:01:54.250664  333222 system_pods.go:61] "coredns-6f6b679f8f-drznk" [d623280c-b5fb-4440-a885-d0a9a14bc995] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0831 23:01:54.250672  333222 system_pods.go:61] "etcd-ha-330867" [3f83b4a1-2cb0-4842-b1fe-851fb3bb9ae5] Running
	I0831 23:01:54.250678  333222 system_pods.go:61] "etcd-ha-330867-m02" [36969a99-6192-4aa7-a072-371da390e418] Running
	I0831 23:01:54.250682  333222 system_pods.go:61] "etcd-ha-330867-m03" [0688d1d4-4d26-496b-9ee4-8693edd282a8] Running
	I0831 23:01:54.250694  333222 system_pods.go:61] "kindnet-6wdgz" [497cc13e-5136-42e9-ba89-f71d53d3c0bc] Running
	I0831 23:01:54.250704  333222 system_pods.go:61] "kindnet-bdzqv" [a399a7b4-f344-4ec3-911e-8c32d75d5067] Running
	I0831 23:01:54.250716  333222 system_pods.go:61] "kindnet-bfwhw" [f422b4a3-3c26-4ea5-8df9-f6c096fdd753] Running
	I0831 23:01:54.250720  333222 system_pods.go:61] "kindnet-fnccr" [a9d2b85c-0746-4a05-a717-6161447fc9d1] Running
	I0831 23:01:54.250727  333222 system_pods.go:61] "kube-apiserver-ha-330867" [fdbb0015-8158-49f3-a4fb-02a878e653da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0831 23:01:54.250736  333222 system_pods.go:61] "kube-apiserver-ha-330867-m02" [8efba5fd-3c97-43c0-b13d-b612c91b93c6] Running
	I0831 23:01:54.250742  333222 system_pods.go:61] "kube-apiserver-ha-330867-m03" [db86dc83-44c0-44fd-978c-d924a8207e12] Running
	I0831 23:01:54.250749  333222 system_pods.go:61] "kube-controller-manager-ha-330867" [823105eb-7ed4-4533-9eea-a9ff49b05b6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0831 23:01:54.250758  333222 system_pods.go:61] "kube-controller-manager-ha-330867-m02" [98ec99db-abe2-4e64-912b-ab5ecaf97c5b] Running
	I0831 23:01:54.250870  333222 system_pods.go:61] "kube-controller-manager-ha-330867-m03" [aaee5c81-aef3-4507-b706-1d6735f176c8] Running
	I0831 23:01:54.250883  333222 system_pods.go:61] "kube-proxy-2km6v" [31c060ec-f4ae-400a-85b9-6dfadada3a5c] Running
	I0831 23:01:54.250888  333222 system_pods.go:61] "kube-proxy-5n584" [ca8e94ba-7c93-4acf-8447-435a472eb72b] Running
	I0831 23:01:54.250892  333222 system_pods.go:61] "kube-proxy-72g7x" [fc8dca69-4778-4bdf-b75c-8f368bcace6d] Running
	I0831 23:01:54.250896  333222 system_pods.go:61] "kube-proxy-fzpmn" [8fc8463c-241a-422f-81fe-56572131cc72] Running
	I0831 23:01:54.250904  333222 system_pods.go:61] "kube-scheduler-ha-330867" [35bdda8a-9c26-44c3-99ac-d4c2adb3dcea] Running
	I0831 23:01:54.250908  333222 system_pods.go:61] "kube-scheduler-ha-330867-m02" [02d4d764-65a0-489f-9878-87e852adcbc4] Running
	I0831 23:01:54.250912  333222 system_pods.go:61] "kube-scheduler-ha-330867-m03" [d0436eba-7021-46d0-bf6e-b414cb5fbfde] Running
	I0831 23:01:54.250920  333222 system_pods.go:61] "kube-vip-ha-330867" [411b1533-b4ba-4c36-b2b7-cf2992289028] Running
	I0831 23:01:54.250925  333222 system_pods.go:61] "kube-vip-ha-330867-m02" [04bcfe59-51be-4aed-8c9c-04701c757838] Running
	I0831 23:01:54.250929  333222 system_pods.go:61] "kube-vip-ha-330867-m03" [ccbc0f53-8220-4eb8-9ebc-9b8c96d58838] Running
	I0831 23:01:54.250933  333222 system_pods.go:61] "storage-provisioner" [d9f043e6-e0f1-4285-a2e8-0afc18eeeca5] Running
	I0831 23:01:54.250940  333222 system_pods.go:74] duration metric: took 3.917612527s to wait for pod list to return data ...
	I0831 23:01:54.250952  333222 default_sa.go:34] waiting for default service account to be created ...
	I0831 23:01:54.251047  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0831 23:01:54.251059  333222 round_trippers.go:469] Request Headers:
	I0831 23:01:54.251067  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:01:54.251072  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:01:54.254690  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:01:54.254979  333222 default_sa.go:45] found service account: "default"
	I0831 23:01:54.255000  333222 default_sa.go:55] duration metric: took 4.04088ms for default service account to be created ...
	I0831 23:01:54.255010  333222 system_pods.go:116] waiting for k8s-apps to be running ...
	I0831 23:01:54.255075  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0831 23:01:54.255085  333222 round_trippers.go:469] Request Headers:
	I0831 23:01:54.255093  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:01:54.255098  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:01:54.260275  333222 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0831 23:01:54.270393  333222 system_pods.go:86] 26 kube-system pods found
	I0831 23:01:54.270432  333222 system_pods.go:89] "coredns-6f6b679f8f-d67w5" [047da125-aee8-40c2-b647-a70792abe582] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0831 23:01:54.270445  333222 system_pods.go:89] "coredns-6f6b679f8f-drznk" [d623280c-b5fb-4440-a885-d0a9a14bc995] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0831 23:01:54.270452  333222 system_pods.go:89] "etcd-ha-330867" [3f83b4a1-2cb0-4842-b1fe-851fb3bb9ae5] Running
	I0831 23:01:54.270459  333222 system_pods.go:89] "etcd-ha-330867-m02" [36969a99-6192-4aa7-a072-371da390e418] Running
	I0831 23:01:54.270471  333222 system_pods.go:89] "etcd-ha-330867-m03" [0688d1d4-4d26-496b-9ee4-8693edd282a8] Running
	I0831 23:01:54.270476  333222 system_pods.go:89] "kindnet-6wdgz" [497cc13e-5136-42e9-ba89-f71d53d3c0bc] Running
	I0831 23:01:54.270487  333222 system_pods.go:89] "kindnet-bdzqv" [a399a7b4-f344-4ec3-911e-8c32d75d5067] Running
	I0831 23:01:54.270493  333222 system_pods.go:89] "kindnet-bfwhw" [f422b4a3-3c26-4ea5-8df9-f6c096fdd753] Running
	I0831 23:01:54.270504  333222 system_pods.go:89] "kindnet-fnccr" [a9d2b85c-0746-4a05-a717-6161447fc9d1] Running
	I0831 23:01:54.270511  333222 system_pods.go:89] "kube-apiserver-ha-330867" [fdbb0015-8158-49f3-a4fb-02a878e653da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0831 23:01:54.270528  333222 system_pods.go:89] "kube-apiserver-ha-330867-m02" [8efba5fd-3c97-43c0-b13d-b612c91b93c6] Running
	I0831 23:01:54.270534  333222 system_pods.go:89] "kube-apiserver-ha-330867-m03" [db86dc83-44c0-44fd-978c-d924a8207e12] Running
	I0831 23:01:54.270552  333222 system_pods.go:89] "kube-controller-manager-ha-330867" [823105eb-7ed4-4533-9eea-a9ff49b05b6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0831 23:01:54.270558  333222 system_pods.go:89] "kube-controller-manager-ha-330867-m02" [98ec99db-abe2-4e64-912b-ab5ecaf97c5b] Running
	I0831 23:01:54.270566  333222 system_pods.go:89] "kube-controller-manager-ha-330867-m03" [aaee5c81-aef3-4507-b706-1d6735f176c8] Running
	I0831 23:01:54.270576  333222 system_pods.go:89] "kube-proxy-2km6v" [31c060ec-f4ae-400a-85b9-6dfadada3a5c] Running
	I0831 23:01:54.270580  333222 system_pods.go:89] "kube-proxy-5n584" [ca8e94ba-7c93-4acf-8447-435a472eb72b] Running
	I0831 23:01:54.270586  333222 system_pods.go:89] "kube-proxy-72g7x" [fc8dca69-4778-4bdf-b75c-8f368bcace6d] Running
	I0831 23:01:54.270596  333222 system_pods.go:89] "kube-proxy-fzpmn" [8fc8463c-241a-422f-81fe-56572131cc72] Running
	I0831 23:01:54.270600  333222 system_pods.go:89] "kube-scheduler-ha-330867" [35bdda8a-9c26-44c3-99ac-d4c2adb3dcea] Running
	I0831 23:01:54.270605  333222 system_pods.go:89] "kube-scheduler-ha-330867-m02" [02d4d764-65a0-489f-9878-87e852adcbc4] Running
	I0831 23:01:54.270616  333222 system_pods.go:89] "kube-scheduler-ha-330867-m03" [d0436eba-7021-46d0-bf6e-b414cb5fbfde] Running
	I0831 23:01:54.270622  333222 system_pods.go:89] "kube-vip-ha-330867" [411b1533-b4ba-4c36-b2b7-cf2992289028] Running
	I0831 23:01:54.270626  333222 system_pods.go:89] "kube-vip-ha-330867-m02" [04bcfe59-51be-4aed-8c9c-04701c757838] Running
	I0831 23:01:54.270638  333222 system_pods.go:89] "kube-vip-ha-330867-m03" [ccbc0f53-8220-4eb8-9ebc-9b8c96d58838] Running
	I0831 23:01:54.270643  333222 system_pods.go:89] "storage-provisioner" [d9f043e6-e0f1-4285-a2e8-0afc18eeeca5] Running
	I0831 23:01:54.270651  333222 system_pods.go:126] duration metric: took 15.635114ms to wait for k8s-apps to be running ...
	I0831 23:01:54.270663  333222 system_svc.go:44] waiting for kubelet service to be running ....
	I0831 23:01:54.270729  333222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 23:01:54.287959  333222 system_svc.go:56] duration metric: took 17.284926ms WaitForService to wait for kubelet
	I0831 23:01:54.287991  333222 kubeadm.go:582] duration metric: took 1m14.485229588s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 23:01:54.288010  333222 node_conditions.go:102] verifying NodePressure condition ...
	I0831 23:01:54.288086  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0831 23:01:54.288098  333222 round_trippers.go:469] Request Headers:
	I0831 23:01:54.288107  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:01:54.288113  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:01:54.291726  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:01:54.293640  333222 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0831 23:01:54.293677  333222 node_conditions.go:123] node cpu capacity is 2
	I0831 23:01:54.293690  333222 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0831 23:01:54.293695  333222 node_conditions.go:123] node cpu capacity is 2
	I0831 23:01:54.293700  333222 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0831 23:01:54.293704  333222 node_conditions.go:123] node cpu capacity is 2
	I0831 23:01:54.293709  333222 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0831 23:01:54.293714  333222 node_conditions.go:123] node cpu capacity is 2
	I0831 23:01:54.293719  333222 node_conditions.go:105] duration metric: took 5.703624ms to run NodePressure ...
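
The NodePressure step above lists every node and reports its ephemeral-storage and CPU capacity. A hedged client-go sketch of the same read follows; the kubeconfig path is a placeholder and the output is simplified compared to node_conditions.go.

// node_capacity.go: sketch of listing nodes and printing the capacity fields
// the NodePressure check reads (ephemeral-storage and cpu).
// The kubeconfig path is a placeholder, not the path used by this test run.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
}
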
	I0831 23:01:54.293736  333222 start.go:241] waiting for startup goroutines ...
	I0831 23:01:54.293763  333222 start.go:255] writing updated cluster config ...
	I0831 23:01:54.297979  333222 out.go:201] 
	I0831 23:01:54.301207  333222 config.go:182] Loaded profile config "ha-330867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:01:54.301396  333222 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/config.json ...
	I0831 23:01:54.304656  333222 out.go:177] * Starting "ha-330867-m03" control-plane node in "ha-330867" cluster
	I0831 23:01:54.308231  333222 cache.go:121] Beginning downloading kic base image for docker with crio
	I0831 23:01:54.310982  333222 out.go:177] * Pulling base image v0.0.44-1724862063-19530 ...
	I0831 23:01:54.313876  333222 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 23:01:54.313963  333222 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local docker daemon
	I0831 23:01:54.314220  333222 cache.go:56] Caching tarball of preloaded images
	I0831 23:01:54.314375  333222 preload.go:172] Found /home/jenkins/minikube-integration/18943-277799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0831 23:01:54.314393  333222 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0831 23:01:54.314589  333222 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/config.json ...
	W0831 23:01:54.337603  333222 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 is of wrong architecture
	I0831 23:01:54.337626  333222 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 to local cache
	I0831 23:01:54.337704  333222 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory
	I0831 23:01:54.337727  333222 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory, skipping pull
	I0831 23:01:54.337736  333222 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 exists in cache, skipping pull
	I0831 23:01:54.337745  333222 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 as a tarball
	I0831 23:01:54.337754  333222 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from local cache
	I0831 23:01:54.339008  333222 image.go:273] response: 
	I0831 23:01:54.465296  333222 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from cached tarball
	I0831 23:01:54.465334  333222 cache.go:194] Successfully downloaded all kic artifacts
	I0831 23:01:54.465368  333222 start.go:360] acquireMachinesLock for ha-330867-m03: {Name:mkd7fe9439318c2f603d7366e69d323ff150955e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 23:01:54.465437  333222 start.go:364] duration metric: took 45.119µs to acquireMachinesLock for "ha-330867-m03"
	I0831 23:01:54.465464  333222 start.go:96] Skipping create...Using existing machine configuration
	I0831 23:01:54.465472  333222 fix.go:54] fixHost starting: m03
	I0831 23:01:54.465754  333222 cli_runner.go:164] Run: docker container inspect ha-330867-m03 --format={{.State.Status}}
	I0831 23:01:54.482750  333222 fix.go:112] recreateIfNeeded on ha-330867-m03: state=Stopped err=<nil>
	W0831 23:01:54.482799  333222 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 23:01:54.487664  333222 out.go:177] * Restarting existing docker container for "ha-330867-m03" ...
	I0831 23:01:54.490536  333222 cli_runner.go:164] Run: docker start ha-330867-m03
	I0831 23:01:54.828133  333222 cli_runner.go:164] Run: docker container inspect ha-330867-m03 --format={{.State.Status}}
	I0831 23:01:54.865801  333222 kic.go:435] container "ha-330867-m03" state is running.
	I0831 23:01:54.866196  333222 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-330867")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-330867-m03
	I0831 23:01:54.893599  333222 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/config.json ...
	I0831 23:01:54.893837  333222 machine.go:93] provisionDockerMachine start ...
	I0831 23:01:54.893900  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m03
	I0831 23:01:54.929299  333222 main.go:141] libmachine: Using SSH client type: native
	I0831 23:01:54.929539  333222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I0831 23:01:54.929547  333222 main.go:141] libmachine: About to run SSH command:
	hostname
	I0831 23:01:54.930633  333222 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37676->127.0.0.1:33183: read: connection reset by peer
	I0831 23:01:58.193023  333222 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-330867-m03
	
	I0831 23:01:58.193102  333222 ubuntu.go:169] provisioning hostname "ha-330867-m03"
	I0831 23:01:58.193180  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m03
	I0831 23:01:58.238158  333222 main.go:141] libmachine: Using SSH client type: native
	I0831 23:01:58.238393  333222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I0831 23:01:58.238405  333222 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-330867-m03 && echo "ha-330867-m03" | sudo tee /etc/hostname
	I0831 23:01:58.448555  333222 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-330867-m03
	
	I0831 23:01:58.448715  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m03
	I0831 23:01:58.475288  333222 main.go:141] libmachine: Using SSH client type: native
	I0831 23:01:58.475539  333222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I0831 23:01:58.475556  333222 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-330867-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-330867-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-330867-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 23:01:58.715988  333222 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 23:01:58.716013  333222 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-277799/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-277799/.minikube}
	I0831 23:01:58.716032  333222 ubuntu.go:177] setting up certificates
	I0831 23:01:58.716042  333222 provision.go:84] configureAuth start
	I0831 23:01:58.716110  333222 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-330867")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-330867-m03
	I0831 23:01:58.750186  333222 provision.go:143] copyHostCerts
	I0831 23:01:58.750230  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-277799/.minikube/key.pem
	I0831 23:01:58.750265  333222 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-277799/.minikube/key.pem, removing ...
	I0831 23:01:58.750278  333222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-277799/.minikube/key.pem
	I0831 23:01:58.750355  333222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-277799/.minikube/key.pem (1675 bytes)
	I0831 23:01:58.750440  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-277799/.minikube/ca.pem
	I0831 23:01:58.750462  333222 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-277799/.minikube/ca.pem, removing ...
	I0831 23:01:58.750469  333222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.pem
	I0831 23:01:58.750496  333222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-277799/.minikube/ca.pem (1082 bytes)
	I0831 23:01:58.750540  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-277799/.minikube/cert.pem
	I0831 23:01:58.750562  333222 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-277799/.minikube/cert.pem, removing ...
	I0831 23:01:58.750567  333222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-277799/.minikube/cert.pem
	I0831 23:01:58.750593  333222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-277799/.minikube/cert.pem (1123 bytes)
	I0831 23:01:58.750645  333222 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca-key.pem org=jenkins.ha-330867-m03 san=[127.0.0.1 192.168.49.4 ha-330867-m03 localhost minikube]
	I0831 23:02:00.108864  333222 provision.go:177] copyRemoteCerts
	I0831 23:02:00.109002  333222 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 23:02:00.109106  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m03
	I0831 23:02:00.144712  333222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867-m03/id_rsa Username:docker}
	I0831 23:02:00.299224  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0831 23:02:00.299298  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0831 23:02:00.370990  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0831 23:02:00.371065  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0831 23:02:00.458745  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0831 23:02:00.458874  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0831 23:02:00.546713  333222 provision.go:87] duration metric: took 1.830652108s to configureAuth
	I0831 23:02:00.546801  333222 ubuntu.go:193] setting minikube options for container-runtime
	I0831 23:02:00.547131  333222 config.go:182] Loaded profile config "ha-330867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:02:00.547301  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m03
	I0831 23:02:00.586401  333222 main.go:141] libmachine: Using SSH client type: native
	I0831 23:02:00.586655  333222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33183 <nil> <nil>}
	I0831 23:02:00.586670  333222 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0831 23:02:02.120880  333222 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0831 23:02:02.120907  333222 machine.go:96] duration metric: took 7.227059871s to provisionDockerMachine
	I0831 23:02:02.120918  333222 start.go:293] postStartSetup for "ha-330867-m03" (driver="docker")
	I0831 23:02:02.120930  333222 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 23:02:02.120990  333222 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 23:02:02.121038  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m03
	I0831 23:02:02.143839  333222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867-m03/id_rsa Username:docker}
	I0831 23:02:02.271420  333222 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 23:02:02.276720  333222 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0831 23:02:02.276752  333222 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0831 23:02:02.276763  333222 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0831 23:02:02.276770  333222 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0831 23:02:02.276780  333222 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-277799/.minikube/addons for local assets ...
	I0831 23:02:02.276841  333222 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-277799/.minikube/files for local assets ...
	I0831 23:02:02.276920  333222 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem -> 2831972.pem in /etc/ssl/certs
	I0831 23:02:02.276927  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem -> /etc/ssl/certs/2831972.pem
	I0831 23:02:02.277027  333222 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0831 23:02:02.294679  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem --> /etc/ssl/certs/2831972.pem (1708 bytes)
	I0831 23:02:02.336001  333222 start.go:296] duration metric: took 215.066908ms for postStartSetup
	I0831 23:02:02.336088  333222 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 23:02:02.336129  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m03
	I0831 23:02:02.370725  333222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867-m03/id_rsa Username:docker}
	I0831 23:02:02.560264  333222 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0831 23:02:02.585776  333222 fix.go:56] duration metric: took 8.120296511s for fixHost
	I0831 23:02:02.585800  333222 start.go:83] releasing machines lock for "ha-330867-m03", held for 8.120349713s
	I0831 23:02:02.585874  333222 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-330867")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-330867-m03
	I0831 23:02:02.620093  333222 out.go:177] * Found network options:
	I0831 23:02:02.623640  333222 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0831 23:02:02.627317  333222 proxy.go:119] fail to check proxy env: Error ip not in block
	W0831 23:02:02.627349  333222 proxy.go:119] fail to check proxy env: Error ip not in block
	W0831 23:02:02.627372  333222 proxy.go:119] fail to check proxy env: Error ip not in block
	W0831 23:02:02.627381  333222 proxy.go:119] fail to check proxy env: Error ip not in block
	I0831 23:02:02.627447  333222 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0831 23:02:02.627493  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m03
	I0831 23:02:02.627723  333222 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 23:02:02.627777  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m03
	I0831 23:02:02.662946  333222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867-m03/id_rsa Username:docker}
	I0831 23:02:02.668325  333222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867-m03/id_rsa Username:docker}
	I0831 23:02:03.023076  333222 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0831 23:02:03.035427  333222 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 23:02:03.090172  333222 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0831 23:02:03.090263  333222 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 23:02:03.111030  333222 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
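
The find/mv commands above rename any bridge or podman CNI config by appending `.mk_disabled`, so that only the expected CNI (kindnet in this cluster) stays active; here nothing matched, hence "nothing to disable". A rough Go equivalent, assuming the standard /etc/cni/net.d layout, might look like this:

// disable_cni.go: sketch of renaming bridge/podman CNI configs to
// <name>.mk_disabled, mirroring the find/mv invocation above.
// Assumes the standard /etc/cni/net.d directory; needs sufficient privileges.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		// Only bridge/podman configs are disabled; loopback is handled by the earlier step.
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			fmt.Println("rename:", err)
		} else {
			fmt.Println("disabled", src)
		}
	}
}
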
	I0831 23:02:03.111057  333222 start.go:495] detecting cgroup driver to use...
	I0831 23:02:03.111092  333222 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0831 23:02:03.111145  333222 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0831 23:02:03.142903  333222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 23:02:03.167304  333222 docker.go:217] disabling cri-docker service (if available) ...
	I0831 23:02:03.167373  333222 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0831 23:02:03.194350  333222 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0831 23:02:03.218164  333222 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0831 23:02:03.440797  333222 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0831 23:02:03.629672  333222 docker.go:233] disabling docker service ...
	I0831 23:02:03.629747  333222 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0831 23:02:03.653984  333222 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0831 23:02:03.671732  333222 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0831 23:02:03.891804  333222 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0831 23:02:04.068008  333222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0831 23:02:04.104511  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 23:02:04.155475  333222 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0831 23:02:04.155551  333222 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:02:04.183772  333222 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0831 23:02:04.183845  333222 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:02:04.216317  333222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:02:04.241312  333222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:02:04.272145  333222 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 23:02:04.297354  333222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:02:04.326010  333222 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:02:04.349659  333222 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:02:04.378174  333222 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 23:02:04.397283  333222 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 23:02:04.422419  333222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 23:02:04.595971  333222 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0831 23:02:05.885729  333222 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.289722059s)
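
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image and switch the cgroup manager to cgroupfs, after which CRI-O is restarted. A small Go sketch of just those two regex rewrites (omitting the conmon_cgroup and sysctl edits, and with minimal error handling) is:

// crio_conf.go: sketch of the sed-style rewrites applied above to
// /etc/crio/crio.conf.d/02-crio.conf (pause_image and cgroup_manager only).
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println(err)
		return
	}
	conf := string(data)
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		fmt.Println(err)
		return
	}
	// A `systemctl restart crio`, as in the log, is still needed to apply the change.
	fmt.Println("updated", path)
}
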
	I0831 23:02:05.885796  333222 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0831 23:02:05.885878  333222 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0831 23:02:05.889636  333222 start.go:563] Will wait 60s for crictl version
	I0831 23:02:05.889718  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:02:05.894741  333222 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 23:02:05.934596  333222 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0831 23:02:05.934694  333222 ssh_runner.go:195] Run: crio --version
	I0831 23:02:05.983869  333222 ssh_runner.go:195] Run: crio --version
	I0831 23:02:06.092526  333222 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0831 23:02:06.095123  333222 out.go:177]   - env NO_PROXY=192.168.49.2
	I0831 23:02:06.097732  333222 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0831 23:02:06.100599  333222 cli_runner.go:164] Run: docker network inspect ha-330867 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0831 23:02:06.118757  333222 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0831 23:02:06.123009  333222 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
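
The bash one-liner above drops any existing `host.minikube.internal` entry from /etc/hosts and appends `192.168.49.1	host.minikube.internal`. The same filter-and-append as a hedged Go sketch (writing /etc/hosts directly instead of going through a /tmp copy and sudo cp):

// hosts_entry.go: sketch of the /etc/hosts update above: remove any line that
// already ends in "host.minikube.internal", then append the gateway mapping.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.49.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println(err)
		return
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // drop stale entries, as `grep -v` does above
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		fmt.Println(err)
	}
}
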
	I0831 23:02:06.134846  333222 mustload.go:65] Loading cluster: ha-330867
	I0831 23:02:06.135100  333222 config.go:182] Loaded profile config "ha-330867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:02:06.135364  333222 cli_runner.go:164] Run: docker container inspect ha-330867 --format={{.State.Status}}
	I0831 23:02:06.155505  333222 host.go:66] Checking if "ha-330867" exists ...
	I0831 23:02:06.155850  333222 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867 for IP: 192.168.49.4
	I0831 23:02:06.155864  333222 certs.go:194] generating shared ca certs ...
	I0831 23:02:06.155880  333222 certs.go:226] acquiring lock for ca certs: {Name:mk25c48345241d49df22687ae20353d5a7b46e8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:02:06.155999  333222 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key
	I0831 23:02:06.156046  333222 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key
	I0831 23:02:06.156060  333222 certs.go:256] generating profile certs ...
	I0831 23:02:06.156141  333222 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/client.key
	I0831 23:02:06.156216  333222 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.key.007e5da7
	I0831 23:02:06.156262  333222 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/proxy-client.key
	I0831 23:02:06.156275  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0831 23:02:06.156288  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0831 23:02:06.156302  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0831 23:02:06.156322  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0831 23:02:06.156333  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0831 23:02:06.156348  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0831 23:02:06.156360  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0831 23:02:06.156371  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0831 23:02:06.156590  333222 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/283197.pem (1338 bytes)
	W0831 23:02:06.156814  333222 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-277799/.minikube/certs/283197_empty.pem, impossibly tiny 0 bytes
	I0831 23:02:06.156833  333222 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca-key.pem (1679 bytes)
	I0831 23:02:06.156869  333222 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem (1082 bytes)
	I0831 23:02:06.156898  333222 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem (1123 bytes)
	I0831 23:02:06.156929  333222 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem (1675 bytes)
	I0831 23:02:06.156983  333222 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem (1708 bytes)
	I0831 23:02:06.157019  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:02:06.157039  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/283197.pem -> /usr/share/ca-certificates/283197.pem
	I0831 23:02:06.157055  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem -> /usr/share/ca-certificates/2831972.pem
	I0831 23:02:06.157116  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867
	I0831 23:02:06.177023  333222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867/id_rsa Username:docker}
	I0831 23:02:06.264762  333222 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0831 23:02:06.268584  333222 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0831 23:02:06.283552  333222 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0831 23:02:06.287710  333222 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0831 23:02:06.301459  333222 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0831 23:02:06.304752  333222 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0831 23:02:06.327294  333222 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0831 23:02:06.332148  333222 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0831 23:02:06.353204  333222 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0831 23:02:06.359830  333222 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0831 23:02:06.374410  333222 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0831 23:02:06.378879  333222 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0831 23:02:06.394165  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 23:02:06.429992  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 23:02:06.460119  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 23:02:06.486159  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0831 23:02:06.513146  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0831 23:02:06.544101  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0831 23:02:06.569469  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 23:02:06.595684  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0831 23:02:06.621461  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 23:02:06.652323  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/certs/283197.pem --> /usr/share/ca-certificates/283197.pem (1338 bytes)
	I0831 23:02:06.680167  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem --> /usr/share/ca-certificates/2831972.pem (1708 bytes)
	I0831 23:02:06.709483  333222 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0831 23:02:06.734037  333222 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0831 23:02:06.761014  333222 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0831 23:02:06.781265  333222 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0831 23:02:06.801631  333222 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0831 23:02:06.821144  333222 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0831 23:02:06.841282  333222 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0831 23:02:06.862913  333222 ssh_runner.go:195] Run: openssl version
	I0831 23:02:06.870904  333222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/283197.pem && ln -fs /usr/share/ca-certificates/283197.pem /etc/ssl/certs/283197.pem"
	I0831 23:02:06.881530  333222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/283197.pem
	I0831 23:02:06.885510  333222 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 31 22:51 /usr/share/ca-certificates/283197.pem
	I0831 23:02:06.885579  333222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/283197.pem
	I0831 23:02:06.893622  333222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/283197.pem /etc/ssl/certs/51391683.0"
	I0831 23:02:06.902856  333222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2831972.pem && ln -fs /usr/share/ca-certificates/2831972.pem /etc/ssl/certs/2831972.pem"
	I0831 23:02:06.912656  333222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2831972.pem
	I0831 23:02:06.916208  333222 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 31 22:51 /usr/share/ca-certificates/2831972.pem
	I0831 23:02:06.916304  333222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2831972.pem
	I0831 23:02:06.923547  333222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2831972.pem /etc/ssl/certs/3ec20f2e.0"
	I0831 23:02:06.932635  333222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 23:02:06.942428  333222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:02:06.946132  333222 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:02:06.946229  333222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:02:06.954852  333222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0831 23:02:06.964180  333222 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 23:02:06.968146  333222 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0831 23:02:06.975556  333222 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0831 23:02:06.982713  333222 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0831 23:02:06.989658  333222 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0831 23:02:06.997144  333222 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0831 23:02:07.004665  333222 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
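
Each `openssl x509 -checkend 86400` above asks whether a certificate will still be valid in 24 hours; a non-zero exit would force regeneration before the node joins. An equivalent check in Go with crypto/x509 (the certificate path below is one of those from the log, used here purely as an example) is:

// checkend.go: Go equivalent of `openssl x509 -noout -in <cert> -checkend 86400`:
// report whether the certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when now + d falls past NotAfter, i.e. the cert expires within d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; would need regeneration")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}
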
	I0831 23:02:07.013391  333222 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.31.0 crio true true} ...
	I0831 23:02:07.013520  333222 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-330867-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-330867 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0831 23:02:07.013554  333222 kube-vip.go:115] generating kube-vip config ...
	I0831 23:02:07.013613  333222 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0831 23:02:07.029984  333222 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0831 23:02:07.030109  333222 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
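
The kube-vip static-pod manifest above is generated per cluster with the VIP (192.168.49.254), the API-server port, and the leader-election settings filled in, then written to /etc/kubernetes/manifests for kubelet to pick up (see the scp of kube-vip.yaml below). A toy Go sketch of that kind of templating, over a deliberately abbreviated stub that is NOT minikube's real template, is:

// kubevip_template.go: toy rendering of an abbreviated kube-vip static-pod
// stub; it only illustrates filling the VIP address and port into a manifest.
package main

import (
	"os"
	"text/template"
)

const stub = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .VIP }}
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(stub))
	// Values taken from the log above; the rendered manifest would normally be
	// written to /etc/kubernetes/manifests/kube-vip.yaml.
	_ = t.Execute(os.Stdout, struct {
		VIP  string
		Port int
	}{VIP: "192.168.49.254", Port: 8443})
}
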
	I0831 23:02:07.030188  333222 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0831 23:02:07.042520  333222 binaries.go:44] Found k8s binaries, skipping transfer
	I0831 23:02:07.042657  333222 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0831 23:02:07.054150  333222 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0831 23:02:07.076299  333222 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 23:02:07.101343  333222 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0831 23:02:07.126382  333222 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0831 23:02:07.130030  333222 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 23:02:07.141625  333222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 23:02:07.255694  333222 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 23:02:07.269770  333222 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 23:02:07.270328  333222 config.go:182] Loaded profile config "ha-330867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:02:07.273069  333222 out.go:177] * Verifying Kubernetes components...
	I0831 23:02:07.275677  333222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 23:02:07.407293  333222 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 23:02:07.421772  333222 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-277799/kubeconfig
	I0831 23:02:07.422093  333222 kapi.go:59] client config for ha-330867: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/client.key", CAFile:"/home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19cbad0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0831 23:02:07.422156  333222 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0831 23:02:07.422384  333222 node_ready.go:35] waiting up to 6m0s for node "ha-330867-m03" to be "Ready" ...
	I0831 23:02:07.422460  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:07.422471  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:07.422480  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:07.422491  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:07.425291  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:07.922755  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:07.922778  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:07.922788  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:07.922792  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:07.926125  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:08.422849  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:08.422877  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:08.422887  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:08.422891  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:08.425606  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:08.923572  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:08.923601  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:08.923610  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:08.923615  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:08.926444  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:09.422834  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:09.422859  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:09.422870  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:09.422876  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:09.425670  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:09.426504  333222 node_ready.go:53] node "ha-330867-m03" has status "Ready":"Unknown"
	I0831 23:02:09.922901  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:09.922922  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:09.922931  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:09.922936  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:09.926153  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:10.423527  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:10.423548  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:10.423558  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:10.423564  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:10.426535  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:10.922825  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:10.922859  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:10.922870  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:10.922875  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:10.926085  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:11.423459  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:11.423482  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:11.423492  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:11.423496  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:11.426256  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:11.426891  333222 node_ready.go:53] node "ha-330867-m03" has status "Ready":"Unknown"
	I0831 23:02:11.922585  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:11.922609  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:11.922621  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:11.922626  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:11.925818  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:12.423330  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:12.423377  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:12.423395  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:12.423399  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:12.426701  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:12.922548  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:12.922577  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:12.922586  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:12.922592  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:12.925632  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:13.422933  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:13.422958  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:13.422969  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:13.422974  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:13.425847  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:13.922711  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:13.922731  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:13.922742  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:13.922747  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:13.925865  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:13.926944  333222 node_ready.go:53] node "ha-330867-m03" has status "Ready":"Unknown"
	I0831 23:02:14.423197  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:14.423223  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:14.423233  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:14.423239  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:14.426122  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:14.922839  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:14.922864  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:14.922874  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:14.922879  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:14.925885  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:15.422526  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:15.422593  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:15.422617  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:15.422636  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:15.426492  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:15.923275  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:15.923296  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:15.923305  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:15.923309  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:15.926535  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:15.927153  333222 node_ready.go:53] node "ha-330867-m03" has status "Ready":"Unknown"
	I0831 23:02:16.422573  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:16.422603  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:16.422613  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:16.422618  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:16.427633  333222 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0831 23:02:16.428445  333222 node_ready.go:49] node "ha-330867-m03" has status "Ready":"True"
	I0831 23:02:16.428469  333222 node_ready.go:38] duration metric: took 9.006065614s for node "ha-330867-m03" to be "Ready" ...
	I0831 23:02:16.428480  333222 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 23:02:16.428548  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0831 23:02:16.428560  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:16.428569  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:16.428575  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:16.436993  333222 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0831 23:02:16.447893  333222 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-d67w5" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:16.447996  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d67w5
	I0831 23:02:16.448007  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:16.448017  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:16.448022  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:16.460185  333222 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0831 23:02:16.461477  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:02:16.461501  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:16.461510  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:16.461516  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:16.466547  333222 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0831 23:02:16.948582  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d67w5
	I0831 23:02:16.948602  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:16.948611  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:16.948618  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:16.952056  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:16.953450  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:02:16.953467  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:16.953477  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:16.953480  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:16.956913  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:17.448677  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d67w5
	I0831 23:02:17.448701  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:17.448710  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:17.448715  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:17.456430  333222 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0831 23:02:17.457638  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:02:17.457667  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:17.457676  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:17.457680  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:17.465698  333222 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0831 23:02:17.948132  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d67w5
	I0831 23:02:17.948158  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:17.948167  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:17.948173  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:17.950987  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:17.951876  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:02:17.951895  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:17.951904  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:17.951908  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:17.956624  333222 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0831 23:02:18.448606  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d67w5
	I0831 23:02:18.448633  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:18.448642  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:18.448648  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:18.451850  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:18.453062  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:02:18.453080  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:18.453089  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:18.453093  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:18.457670  333222 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0831 23:02:18.458680  333222 pod_ready.go:103] pod "coredns-6f6b679f8f-d67w5" in "kube-system" namespace has status "Ready":"False"
	I0831 23:02:18.948132  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d67w5
	I0831 23:02:18.948156  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:18.948166  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:18.948177  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:18.951004  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:18.952079  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:02:18.952101  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:18.952111  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:18.952121  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:18.955114  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:19.448158  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d67w5
	I0831 23:02:19.448183  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:19.448193  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:19.448197  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:19.462635  333222 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0831 23:02:19.464667  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:02:19.464683  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:19.464692  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:19.464695  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:19.473117  333222 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0831 23:02:19.948483  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d67w5
	I0831 23:02:19.948502  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:19.948566  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:19.948572  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:19.951816  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:19.953069  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:02:19.953086  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:19.953095  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:19.953102  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:19.955844  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:20.449086  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d67w5
	I0831 23:02:20.449146  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:20.449155  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:20.449160  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:20.452728  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:20.453799  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:02:20.453850  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:20.453872  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:20.453890  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:20.457805  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:20.458713  333222 pod_ready.go:103] pod "coredns-6f6b679f8f-d67w5" in "kube-system" namespace has status "Ready":"False"
	I0831 23:02:20.948555  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d67w5
	I0831 23:02:20.948579  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:20.948589  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:20.948593  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:20.954279  333222 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0831 23:02:20.955220  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:02:20.955244  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:20.955253  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:20.955258  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:20.974231  333222 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0831 23:02:20.974780  333222 pod_ready.go:98] node "ha-330867" hosting pod "coredns-6f6b679f8f-d67w5" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:02:20.974805  333222 pod_ready.go:82] duration metric: took 4.526877584s for pod "coredns-6f6b679f8f-d67w5" in "kube-system" namespace to be "Ready" ...
	E0831 23:02:20.974815  333222 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-330867" hosting pod "coredns-6f6b679f8f-d67w5" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:02:20.974823  333222 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-drznk" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:20.974903  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-drznk
	I0831 23:02:20.974913  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:20.974921  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:20.974925  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:20.983698  333222 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0831 23:02:20.984573  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:02:20.984593  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:20.984602  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:20.984606  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:21.004740  333222 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0831 23:02:21.007992  333222 pod_ready.go:98] node "ha-330867" hosting pod "coredns-6f6b679f8f-drznk" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:02:21.008028  333222 pod_ready.go:82] duration metric: took 33.193901ms for pod "coredns-6f6b679f8f-drznk" in "kube-system" namespace to be "Ready" ...
	E0831 23:02:21.008040  333222 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-330867" hosting pod "coredns-6f6b679f8f-drznk" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:02:21.008055  333222 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-330867" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:21.008149  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867
	I0831 23:02:21.008160  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:21.008168  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:21.008172  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:21.035819  333222 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0831 23:02:21.036592  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:02:21.036614  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:21.036624  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:21.036637  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:21.042205  333222 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0831 23:02:21.042862  333222 pod_ready.go:98] node "ha-330867" hosting pod "etcd-ha-330867" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:02:21.042891  333222 pod_ready.go:82] duration metric: took 34.827665ms for pod "etcd-ha-330867" in "kube-system" namespace to be "Ready" ...
	E0831 23:02:21.042904  333222 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-330867" hosting pod "etcd-ha-330867" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:02:21.042914  333222 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:21.042999  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m02
	I0831 23:02:21.043012  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:21.043023  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:21.043041  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:21.053101  333222 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0831 23:02:21.053813  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:02:21.053834  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:21.053844  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:21.053851  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:21.059938  333222 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0831 23:02:21.060547  333222 pod_ready.go:93] pod "etcd-ha-330867-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 23:02:21.060575  333222 pod_ready.go:82] duration metric: took 17.645646ms for pod "etcd-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:21.060588  333222 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-330867-m03" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:21.060662  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m03
	I0831 23:02:21.060674  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:21.060683  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:21.060688  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:21.071392  333222 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0831 23:02:21.072150  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:21.072168  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:21.072178  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:21.072183  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:21.082358  333222 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0831 23:02:21.560821  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m03
	I0831 23:02:21.560856  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:21.560866  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:21.560870  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:21.564083  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:21.565708  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:21.565729  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:21.565738  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:21.565743  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:21.568915  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:22.061714  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m03
	I0831 23:02:22.061740  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:22.061750  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:22.061755  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:22.064964  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:22.066397  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:22.066421  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:22.066437  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:22.066447  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:22.069887  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:22.561748  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m03
	I0831 23:02:22.561772  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:22.561782  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:22.561786  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:22.564658  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:22.565486  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:22.565504  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:22.565513  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:22.565518  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:22.568143  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:23.061364  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m03
	I0831 23:02:23.061437  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:23.061461  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:23.061484  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:23.064747  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:23.066051  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:23.066068  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:23.066079  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:23.066084  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:23.069224  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:23.069803  333222 pod_ready.go:103] pod "etcd-ha-330867-m03" in "kube-system" namespace has status "Ready":"False"
	I0831 23:02:23.561546  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m03
	I0831 23:02:23.561570  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:23.561578  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:23.561582  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:23.564608  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:23.565310  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:23.565358  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:23.565373  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:23.565379  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:23.568076  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:24.061611  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m03
	I0831 23:02:24.061648  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:24.061665  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:24.061670  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:24.065327  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:24.066181  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:24.066204  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:24.066244  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:24.066255  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:24.069391  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:24.560858  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m03
	I0831 23:02:24.560884  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:24.560893  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:24.560898  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:24.564096  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:24.566043  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:24.566068  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:24.566078  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:24.566085  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:24.569399  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:25.060887  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m03
	I0831 23:02:25.060911  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:25.060921  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:25.060927  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:25.064303  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:25.065097  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:25.065120  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:25.065130  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:25.065135  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:25.068553  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:25.070278  333222 pod_ready.go:103] pod "etcd-ha-330867-m03" in "kube-system" namespace has status "Ready":"False"
	I0831 23:02:25.561688  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m03
	I0831 23:02:25.561711  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:25.561720  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:25.561728  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:25.564726  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:25.565433  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:25.565450  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:25.565460  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:25.565473  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:25.567963  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:26.060781  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m03
	I0831 23:02:26.060804  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:26.060813  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:26.060820  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:26.063784  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:26.064821  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:26.064842  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:26.064852  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:26.064856  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:26.067677  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:26.561061  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m03
	I0831 23:02:26.561087  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:26.561096  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:26.561102  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:26.564069  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:26.564880  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:26.564910  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:26.564920  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:26.564925  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:26.567645  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:27.061698  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m03
	I0831 23:02:27.061727  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:27.061741  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:27.061747  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:27.065552  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:27.066398  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:27.066461  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:27.066485  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:27.066506  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:27.071625  333222 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0831 23:02:27.072475  333222 pod_ready.go:103] pod "etcd-ha-330867-m03" in "kube-system" namespace has status "Ready":"False"
	I0831 23:02:27.561290  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m03
	I0831 23:02:27.561313  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:27.561322  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:27.561326  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:27.566022  333222 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0831 23:02:27.566800  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:27.566819  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:27.566828  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:27.566833  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:27.570432  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:28.061676  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m03
	I0831 23:02:28.061699  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:28.061710  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:28.061715  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:28.066528  333222 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0831 23:02:28.067664  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:28.067732  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:28.067755  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:28.067779  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:28.071611  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:28.560958  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m03
	I0831 23:02:28.560985  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:28.560995  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:28.560999  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:28.564217  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:28.564900  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:28.564913  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:28.564921  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:28.564926  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:28.567447  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:29.061654  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m03
	I0831 23:02:29.061682  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:29.061693  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:29.061697  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:29.064843  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:29.065632  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:29.065648  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:29.065657  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:29.065661  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:29.068172  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:29.560869  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m03
	I0831 23:02:29.560894  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:29.560905  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:29.560909  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:29.564078  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:29.564945  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:29.564959  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:29.564973  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:29.564979  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:29.567729  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:29.568882  333222 pod_ready.go:103] pod "etcd-ha-330867-m03" in "kube-system" namespace has status "Ready":"False"
	I0831 23:02:30.069093  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m03
	I0831 23:02:30.069119  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:30.069129  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:30.069134  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:30.074379  333222 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0831 23:02:30.075229  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:30.075250  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:30.075267  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:30.075272  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:30.081536  333222 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0831 23:02:30.561382  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m03
	I0831 23:02:30.561406  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:30.561415  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:30.561420  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:30.564455  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:30.565186  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:30.565209  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:30.565218  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:30.565223  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:30.567822  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:31.061636  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m03
	I0831 23:02:31.061712  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:31.061746  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:31.061767  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:31.068562  333222 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0831 23:02:31.070772  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:31.070839  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:31.070874  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:31.070896  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:31.074275  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:31.561758  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m03
	I0831 23:02:31.561781  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:31.561791  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:31.561796  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:31.565786  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:31.566841  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:31.566864  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:31.566873  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:31.566877  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:31.576331  333222 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0831 23:02:31.580877  333222 pod_ready.go:103] pod "etcd-ha-330867-m03" in "kube-system" namespace has status "Ready":"False"
	I0831 23:02:32.061685  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m03
	I0831 23:02:32.061709  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:32.061720  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:32.061729  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:32.065054  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:32.065859  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:32.065879  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:32.065890  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:32.065896  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:32.069214  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:32.560853  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m03
	I0831 23:02:32.560873  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:32.560883  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:32.560888  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:32.564756  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:32.565924  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:32.565945  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:32.565955  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:32.565959  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:32.572569  333222 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0831 23:02:32.573378  333222 pod_ready.go:93] pod "etcd-ha-330867-m03" in "kube-system" namespace has status "Ready":"True"
	I0831 23:02:32.573400  333222 pod_ready.go:82] duration metric: took 11.51280481s for pod "etcd-ha-330867-m03" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:32.573423  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-330867" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:32.573497  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-330867
	I0831 23:02:32.573507  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:32.573517  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:32.573521  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:32.576387  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:32.577153  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:02:32.577170  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:32.577180  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:32.577187  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:32.580045  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:32.580739  333222 pod_ready.go:98] node "ha-330867" hosting pod "kube-apiserver-ha-330867" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:02:32.580766  333222 pod_ready.go:82] duration metric: took 7.331603ms for pod "kube-apiserver-ha-330867" in "kube-system" namespace to be "Ready" ...
	E0831 23:02:32.580794  333222 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-330867" hosting pod "kube-apiserver-ha-330867" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:02:32.580803  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:32.580890  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-330867-m02
	I0831 23:02:32.580898  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:32.580907  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:32.580912  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:32.583553  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:32.584504  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:02:32.584526  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:32.584535  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:32.584539  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:32.587217  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:32.587916  333222 pod_ready.go:93] pod "kube-apiserver-ha-330867-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 23:02:32.587938  333222 pod_ready.go:82] duration metric: took 7.119354ms for pod "kube-apiserver-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:32.587952  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-330867-m03" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:32.588057  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-330867-m03
	I0831 23:02:32.588067  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:32.588076  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:32.588079  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:32.591099  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:32.591980  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:32.592001  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:32.592011  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:32.592015  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:32.595236  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:32.595870  333222 pod_ready.go:93] pod "kube-apiserver-ha-330867-m03" in "kube-system" namespace has status "Ready":"True"
	I0831 23:02:32.595895  333222 pod_ready.go:82] duration metric: took 7.919308ms for pod "kube-apiserver-ha-330867-m03" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:32.595907  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-330867" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:32.595976  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-330867
	I0831 23:02:32.595989  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:32.595997  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:32.596001  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:32.599473  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:32.600401  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:02:32.600455  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:32.600466  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:32.600471  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:32.603269  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:32.604068  333222 pod_ready.go:98] node "ha-330867" hosting pod "kube-controller-manager-ha-330867" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:02:32.604096  333222 pod_ready.go:82] duration metric: took 8.17543ms for pod "kube-controller-manager-ha-330867" in "kube-system" namespace to be "Ready" ...
	E0831 23:02:32.604106  333222 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-330867" hosting pod "kube-controller-manager-ha-330867" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:02:32.604114  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:32.761324  333222 request.go:632] Waited for 157.143688ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-330867-m02
	I0831 23:02:32.761468  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-330867-m02
	I0831 23:02:32.761495  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:32.761517  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:32.761537  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:32.764647  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:32.961112  333222 request.go:632] Waited for 195.295807ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:02:32.961184  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:02:32.961195  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:32.961205  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:32.961210  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:32.964052  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:32.964649  333222 pod_ready.go:93] pod "kube-controller-manager-ha-330867-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 23:02:32.964678  333222 pod_ready.go:82] duration metric: took 360.555897ms for pod "kube-controller-manager-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:32.964690  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-330867-m03" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:33.161730  333222 request.go:632] Waited for 196.929069ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-330867-m03
	I0831 23:02:33.161799  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-330867-m03
	I0831 23:02:33.161808  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:33.161817  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:33.161832  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:33.165062  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:33.361493  333222 request.go:632] Waited for 195.351174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:33.361570  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:33.361583  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:33.361592  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:33.361596  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:33.364639  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:33.365780  333222 pod_ready.go:93] pod "kube-controller-manager-ha-330867-m03" in "kube-system" namespace has status "Ready":"True"
	I0831 23:02:33.365803  333222 pod_ready.go:82] duration metric: took 401.104631ms for pod "kube-controller-manager-ha-330867-m03" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:33.365815  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2km6v" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:33.561673  333222 request.go:632] Waited for 195.76988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2km6v
	I0831 23:02:33.561748  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2km6v
	I0831 23:02:33.561759  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:33.561774  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:33.561782  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:33.564836  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:33.761759  333222 request.go:632] Waited for 196.178969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:33.761823  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:33.761834  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:33.761853  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:33.761867  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:33.765046  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:33.766192  333222 pod_ready.go:93] pod "kube-proxy-2km6v" in "kube-system" namespace has status "Ready":"True"
	I0831 23:02:33.766217  333222 pod_ready.go:82] duration metric: took 400.372852ms for pod "kube-proxy-2km6v" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:33.766229  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5n584" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:33.961659  333222 request.go:632] Waited for 195.323056ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:02:33.961722  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:02:33.961732  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:33.961741  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:33.961751  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:33.965341  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:34.161601  333222 request.go:632] Waited for 195.291343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:02:34.161686  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:02:34.161712  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:34.161724  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:34.161729  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:34.164749  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:34.165314  333222 pod_ready.go:98] node "ha-330867-m04" hosting pod "kube-proxy-5n584" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867-m04" has status "Ready":"Unknown"
	I0831 23:02:34.165336  333222 pod_ready.go:82] duration metric: took 399.077967ms for pod "kube-proxy-5n584" in "kube-system" namespace to be "Ready" ...
	E0831 23:02:34.165347  333222 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-330867-m04" hosting pod "kube-proxy-5n584" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867-m04" has status "Ready":"Unknown"
	I0831 23:02:34.165355  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-72g7x" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:34.361670  333222 request.go:632] Waited for 196.230883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-72g7x
	I0831 23:02:34.361764  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-72g7x
	I0831 23:02:34.361774  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:34.361782  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:34.361790  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:34.364769  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:34.561676  333222 request.go:632] Waited for 196.212249ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:02:34.561762  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:02:34.561797  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:34.561842  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:34.561879  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:34.564887  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:34.565705  333222 pod_ready.go:93] pod "kube-proxy-72g7x" in "kube-system" namespace has status "Ready":"True"
	I0831 23:02:34.565727  333222 pod_ready.go:82] duration metric: took 400.361866ms for pod "kube-proxy-72g7x" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:34.565739  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fzpmn" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:34.761584  333222 request.go:632] Waited for 195.762192ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fzpmn
	I0831 23:02:34.761663  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fzpmn
	I0831 23:02:34.761699  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:34.761714  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:34.761720  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:34.764746  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:34.961837  333222 request.go:632] Waited for 196.329204ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:02:34.961897  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:02:34.961909  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:34.961918  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:34.961928  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:34.964941  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:34.965597  333222 pod_ready.go:98] node "ha-330867" hosting pod "kube-proxy-fzpmn" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:02:34.965619  333222 pod_ready.go:82] duration metric: took 399.872096ms for pod "kube-proxy-fzpmn" in "kube-system" namespace to be "Ready" ...
	E0831 23:02:34.965630  333222 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-330867" hosting pod "kube-proxy-fzpmn" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:02:34.965657  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-330867" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:35.161403  333222 request.go:632] Waited for 195.66968ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-330867
	I0831 23:02:35.161468  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-330867
	I0831 23:02:35.161480  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:35.161494  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:35.161503  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:35.164816  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:35.361792  333222 request.go:632] Waited for 196.342956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:02:35.361854  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:02:35.361866  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:35.361875  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:35.361883  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:35.365067  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:35.365723  333222 pod_ready.go:98] node "ha-330867" hosting pod "kube-scheduler-ha-330867" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:02:35.365746  333222 pod_ready.go:82] duration metric: took 400.074901ms for pod "kube-scheduler-ha-330867" in "kube-system" namespace to be "Ready" ...
	E0831 23:02:35.365777  333222 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-330867" hosting pod "kube-scheduler-ha-330867" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:02:35.365786  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:35.561601  333222 request.go:632] Waited for 195.741097ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-330867-m02
	I0831 23:02:35.561668  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-330867-m02
	I0831 23:02:35.561678  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:35.561687  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:35.561695  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:35.564814  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:35.761766  333222 request.go:632] Waited for 196.323248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:02:35.761886  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:02:35.761902  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:35.761911  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:35.761917  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:35.764463  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:35.765251  333222 pod_ready.go:93] pod "kube-scheduler-ha-330867-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 23:02:35.765298  333222 pod_ready.go:82] duration metric: took 399.499159ms for pod "kube-scheduler-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:35.765316  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-330867-m03" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:35.961195  333222 request.go:632] Waited for 195.809503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-330867-m03
	I0831 23:02:35.961284  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-330867-m03
	I0831 23:02:35.961296  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:35.961306  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:35.961312  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:35.972114  333222 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0831 23:02:36.161604  333222 request.go:632] Waited for 188.362126ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:36.161726  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:36.161758  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:36.161785  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:36.161807  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:36.164850  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:36.165365  333222 pod_ready.go:93] pod "kube-scheduler-ha-330867-m03" in "kube-system" namespace has status "Ready":"True"
	I0831 23:02:36.165389  333222 pod_ready.go:82] duration metric: took 400.063718ms for pod "kube-scheduler-ha-330867-m03" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:36.165403  333222 pod_ready.go:39] duration metric: took 19.736908671s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
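The wait summarized above is a loop of plain GETs: fetch each static pod, fetch its node, and treat the pod as "Ready" only when the pod's Ready condition is True and the hosting node is itself Ready. For readers reproducing that check by hand, here is a minimal client-go sketch, for illustration only; the kubeconfig path is an assumption and the pod names are a subset of those in the log above.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumption: kubeconfig for the ha-330867 profile; adjust the path for your environment.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// A few of the control-plane pod names that appear in the log above.
    	for _, name := range []string{"kube-apiserver-ha-330867", "kube-controller-manager-ha-330867", "kube-scheduler-ha-330867"} {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			fmt.Println(name, "error:", err)
    			continue
    		}
    		ready := false
    		for _, c := range pod.Status.Conditions {
    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    				ready = true
    			}
    		}
    		fmt.Printf("%s Ready=%v on node %s\n", name, ready, pod.Spec.NodeName)
    	}
    }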
	I0831 23:02:36.165424  333222 api_server.go:52] waiting for apiserver process to appear ...
	I0831 23:02:36.165499  333222 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 23:02:36.226268  333222 api_server.go:72] duration metric: took 28.956421858s to wait for apiserver process to appear ...
	I0831 23:02:36.226304  333222 api_server.go:88] waiting for apiserver healthz status ...
	I0831 23:02:36.226327  333222 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:02:36.236013  333222 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0831 23:02:36.236111  333222 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0831 23:02:36.236130  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:36.236141  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:36.236154  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:36.237209  333222 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0831 23:02:36.237309  333222 api_server.go:141] control plane version: v1.31.0
	I0831 23:02:36.237326  333222 api_server.go:131] duration metric: took 11.01365ms to wait for apiserver health ...
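The healthz probe above is nothing more than an HTTPS GET against the apiserver endpoint from the log, expecting a 200 with the body "ok". A minimal Go sketch of the same check; TLS verification is skipped here purely to keep the sketch short, whereas minikube itself trusts the cluster CA.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Illustration only: skip certificate verification against the self-signed cluster cert.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.49.2:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(resp.StatusCode, string(body)) // expect "200 ok" on a healthy apiserver
    }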
	I0831 23:02:36.237335  333222 system_pods.go:43] waiting for kube-system pods to appear ...
	I0831 23:02:36.361687  333222 request.go:632] Waited for 124.283372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0831 23:02:36.361768  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0831 23:02:36.361777  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:36.361785  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:36.361795  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:36.367696  333222 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0831 23:02:36.377898  333222 system_pods.go:59] 26 kube-system pods found
	I0831 23:02:36.377938  333222 system_pods.go:61] "coredns-6f6b679f8f-d67w5" [047da125-aee8-40c2-b647-a70792abe582] Running
	I0831 23:02:36.377945  333222 system_pods.go:61] "coredns-6f6b679f8f-drznk" [d623280c-b5fb-4440-a885-d0a9a14bc995] Running
	I0831 23:02:36.377950  333222 system_pods.go:61] "etcd-ha-330867" [3f83b4a1-2cb0-4842-b1fe-851fb3bb9ae5] Running
	I0831 23:02:36.377954  333222 system_pods.go:61] "etcd-ha-330867-m02" [36969a99-6192-4aa7-a072-371da390e418] Running
	I0831 23:02:36.377959  333222 system_pods.go:61] "etcd-ha-330867-m03" [0688d1d4-4d26-496b-9ee4-8693edd282a8] Running
	I0831 23:02:36.377963  333222 system_pods.go:61] "kindnet-6wdgz" [497cc13e-5136-42e9-ba89-f71d53d3c0bc] Running
	I0831 23:02:36.377967  333222 system_pods.go:61] "kindnet-bdzqv" [a399a7b4-f344-4ec3-911e-8c32d75d5067] Running
	I0831 23:02:36.377972  333222 system_pods.go:61] "kindnet-bfwhw" [f422b4a3-3c26-4ea5-8df9-f6c096fdd753] Running
	I0831 23:02:36.377976  333222 system_pods.go:61] "kindnet-fnccr" [a9d2b85c-0746-4a05-a717-6161447fc9d1] Running
	I0831 23:02:36.377986  333222 system_pods.go:61] "kube-apiserver-ha-330867" [fdbb0015-8158-49f3-a4fb-02a878e653da] Running
	I0831 23:02:36.377991  333222 system_pods.go:61] "kube-apiserver-ha-330867-m02" [8efba5fd-3c97-43c0-b13d-b612c91b93c6] Running
	I0831 23:02:36.378003  333222 system_pods.go:61] "kube-apiserver-ha-330867-m03" [db86dc83-44c0-44fd-978c-d924a8207e12] Running
	I0831 23:02:36.378007  333222 system_pods.go:61] "kube-controller-manager-ha-330867" [823105eb-7ed4-4533-9eea-a9ff49b05b6f] Running
	I0831 23:02:36.378011  333222 system_pods.go:61] "kube-controller-manager-ha-330867-m02" [98ec99db-abe2-4e64-912b-ab5ecaf97c5b] Running
	I0831 23:02:36.378016  333222 system_pods.go:61] "kube-controller-manager-ha-330867-m03" [aaee5c81-aef3-4507-b706-1d6735f176c8] Running
	I0831 23:02:36.378025  333222 system_pods.go:61] "kube-proxy-2km6v" [31c060ec-f4ae-400a-85b9-6dfadada3a5c] Running
	I0831 23:02:36.378029  333222 system_pods.go:61] "kube-proxy-5n584" [ca8e94ba-7c93-4acf-8447-435a472eb72b] Running
	I0831 23:02:36.378033  333222 system_pods.go:61] "kube-proxy-72g7x" [fc8dca69-4778-4bdf-b75c-8f368bcace6d] Running
	I0831 23:02:36.378040  333222 system_pods.go:61] "kube-proxy-fzpmn" [8fc8463c-241a-422f-81fe-56572131cc72] Running
	I0831 23:02:36.378045  333222 system_pods.go:61] "kube-scheduler-ha-330867" [35bdda8a-9c26-44c3-99ac-d4c2adb3dcea] Running
	I0831 23:02:36.378059  333222 system_pods.go:61] "kube-scheduler-ha-330867-m02" [02d4d764-65a0-489f-9878-87e852adcbc4] Running
	I0831 23:02:36.378063  333222 system_pods.go:61] "kube-scheduler-ha-330867-m03" [d0436eba-7021-46d0-bf6e-b414cb5fbfde] Running
	I0831 23:02:36.378066  333222 system_pods.go:61] "kube-vip-ha-330867" [411b1533-b4ba-4c36-b2b7-cf2992289028] Running
	I0831 23:02:36.378072  333222 system_pods.go:61] "kube-vip-ha-330867-m02" [04bcfe59-51be-4aed-8c9c-04701c757838] Running
	I0831 23:02:36.378078  333222 system_pods.go:61] "kube-vip-ha-330867-m03" [ccbc0f53-8220-4eb8-9ebc-9b8c96d58838] Running
	I0831 23:02:36.378083  333222 system_pods.go:61] "storage-provisioner" [d9f043e6-e0f1-4285-a2e8-0afc18eeeca5] Running
	I0831 23:02:36.378091  333222 system_pods.go:74] duration metric: took 140.751195ms to wait for pod list to return data ...
	I0831 23:02:36.378100  333222 default_sa.go:34] waiting for default service account to be created ...
	I0831 23:02:36.561549  333222 request.go:632] Waited for 183.36212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0831 23:02:36.561630  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0831 23:02:36.561642  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:36.561651  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:36.561660  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:36.564774  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:36.565050  333222 default_sa.go:45] found service account: "default"
	I0831 23:02:36.565072  333222 default_sa.go:55] duration metric: took 186.964218ms for default service account to be created ...
	I0831 23:02:36.565082  333222 system_pods.go:116] waiting for k8s-apps to be running ...
	I0831 23:02:36.761486  333222 request.go:632] Waited for 196.333364ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0831 23:02:36.761548  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0831 23:02:36.761560  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:36.761569  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:36.761576  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:36.767473  333222 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0831 23:02:36.777996  333222 system_pods.go:86] 26 kube-system pods found
	I0831 23:02:36.778039  333222 system_pods.go:89] "coredns-6f6b679f8f-d67w5" [047da125-aee8-40c2-b647-a70792abe582] Running
	I0831 23:02:36.778058  333222 system_pods.go:89] "coredns-6f6b679f8f-drznk" [d623280c-b5fb-4440-a885-d0a9a14bc995] Running
	I0831 23:02:36.778084  333222 system_pods.go:89] "etcd-ha-330867" [3f83b4a1-2cb0-4842-b1fe-851fb3bb9ae5] Running
	I0831 23:02:36.778101  333222 system_pods.go:89] "etcd-ha-330867-m02" [36969a99-6192-4aa7-a072-371da390e418] Running
	I0831 23:02:36.778106  333222 system_pods.go:89] "etcd-ha-330867-m03" [0688d1d4-4d26-496b-9ee4-8693edd282a8] Running
	I0831 23:02:36.778111  333222 system_pods.go:89] "kindnet-6wdgz" [497cc13e-5136-42e9-ba89-f71d53d3c0bc] Running
	I0831 23:02:36.778125  333222 system_pods.go:89] "kindnet-bdzqv" [a399a7b4-f344-4ec3-911e-8c32d75d5067] Running
	I0831 23:02:36.778129  333222 system_pods.go:89] "kindnet-bfwhw" [f422b4a3-3c26-4ea5-8df9-f6c096fdd753] Running
	I0831 23:02:36.778134  333222 system_pods.go:89] "kindnet-fnccr" [a9d2b85c-0746-4a05-a717-6161447fc9d1] Running
	I0831 23:02:36.778141  333222 system_pods.go:89] "kube-apiserver-ha-330867" [fdbb0015-8158-49f3-a4fb-02a878e653da] Running
	I0831 23:02:36.778175  333222 system_pods.go:89] "kube-apiserver-ha-330867-m02" [8efba5fd-3c97-43c0-b13d-b612c91b93c6] Running
	I0831 23:02:36.778193  333222 system_pods.go:89] "kube-apiserver-ha-330867-m03" [db86dc83-44c0-44fd-978c-d924a8207e12] Running
	I0831 23:02:36.778214  333222 system_pods.go:89] "kube-controller-manager-ha-330867" [823105eb-7ed4-4533-9eea-a9ff49b05b6f] Running
	I0831 23:02:36.778220  333222 system_pods.go:89] "kube-controller-manager-ha-330867-m02" [98ec99db-abe2-4e64-912b-ab5ecaf97c5b] Running
	I0831 23:02:36.778233  333222 system_pods.go:89] "kube-controller-manager-ha-330867-m03" [aaee5c81-aef3-4507-b706-1d6735f176c8] Running
	I0831 23:02:36.778262  333222 system_pods.go:89] "kube-proxy-2km6v" [31c060ec-f4ae-400a-85b9-6dfadada3a5c] Running
	I0831 23:02:36.778270  333222 system_pods.go:89] "kube-proxy-5n584" [ca8e94ba-7c93-4acf-8447-435a472eb72b] Running
	I0831 23:02:36.778274  333222 system_pods.go:89] "kube-proxy-72g7x" [fc8dca69-4778-4bdf-b75c-8f368bcace6d] Running
	I0831 23:02:36.778284  333222 system_pods.go:89] "kube-proxy-fzpmn" [8fc8463c-241a-422f-81fe-56572131cc72] Running
	I0831 23:02:36.778289  333222 system_pods.go:89] "kube-scheduler-ha-330867" [35bdda8a-9c26-44c3-99ac-d4c2adb3dcea] Running
	I0831 23:02:36.778294  333222 system_pods.go:89] "kube-scheduler-ha-330867-m02" [02d4d764-65a0-489f-9878-87e852adcbc4] Running
	I0831 23:02:36.778301  333222 system_pods.go:89] "kube-scheduler-ha-330867-m03" [d0436eba-7021-46d0-bf6e-b414cb5fbfde] Running
	I0831 23:02:36.778306  333222 system_pods.go:89] "kube-vip-ha-330867" [411b1533-b4ba-4c36-b2b7-cf2992289028] Running
	I0831 23:02:36.778309  333222 system_pods.go:89] "kube-vip-ha-330867-m02" [04bcfe59-51be-4aed-8c9c-04701c757838] Running
	I0831 23:02:36.778313  333222 system_pods.go:89] "kube-vip-ha-330867-m03" [ccbc0f53-8220-4eb8-9ebc-9b8c96d58838] Running
	I0831 23:02:36.778317  333222 system_pods.go:89] "storage-provisioner" [d9f043e6-e0f1-4285-a2e8-0afc18eeeca5] Running
	I0831 23:02:36.778328  333222 system_pods.go:126] duration metric: took 213.240185ms to wait for k8s-apps to be running ...
	I0831 23:02:36.778341  333222 system_svc.go:44] waiting for kubelet service to be running ....
	I0831 23:02:36.778403  333222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 23:02:36.794134  333222 system_svc.go:56] duration metric: took 15.784078ms WaitForService to wait for kubelet
	I0831 23:02:36.794166  333222 kubeadm.go:582] duration metric: took 29.524326001s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 23:02:36.794187  333222 node_conditions.go:102] verifying NodePressure condition ...
	I0831 23:02:36.961526  333222 request.go:632] Waited for 167.229311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0831 23:02:36.961614  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0831 23:02:36.961626  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:36.961636  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:36.961642  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:36.965293  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:36.966938  333222 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0831 23:02:36.966988  333222 node_conditions.go:123] node cpu capacity is 2
	I0831 23:02:36.967002  333222 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0831 23:02:36.967007  333222 node_conditions.go:123] node cpu capacity is 2
	I0831 23:02:36.967012  333222 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0831 23:02:36.967047  333222 node_conditions.go:123] node cpu capacity is 2
	I0831 23:02:36.967052  333222 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0831 23:02:36.967057  333222 node_conditions.go:123] node cpu capacity is 2
	I0831 23:02:36.967062  333222 node_conditions.go:105] duration metric: took 172.83737ms to run NodePressure ...
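The NodePressure step above reads each node's capacity (203034800Ki ephemeral storage and 2 CPUs in this run) and its pressure conditions from a single /api/v1/nodes list. A client-go sketch of the same read, under the same kubeconfig assumption as the earlier sketch:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		fmt.Printf("%s cpu=%s ephemeral-storage=%s\n",
    			n.Name,
    			n.Status.Capacity.Cpu().String(),
    			n.Status.Capacity.StorageEphemeral().String())
    		for _, c := range n.Status.Conditions {
    			// Only the pressure conditions that NodePressure verification cares about.
    			if c.Type == "MemoryPressure" || c.Type == "DiskPressure" || c.Type == "PIDPressure" {
    				fmt.Printf("  %s=%s\n", c.Type, c.Status)
    			}
    		}
    	}
    }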
	I0831 23:02:36.967078  333222 start.go:241] waiting for startup goroutines ...
	I0831 23:02:36.967107  333222 start.go:255] writing updated cluster config ...
	I0831 23:02:36.969141  333222 out.go:201] 
	I0831 23:02:36.971064  333222 config.go:182] Loaded profile config "ha-330867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:02:36.971212  333222 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/config.json ...
	I0831 23:02:36.973209  333222 out.go:177] * Starting "ha-330867-m04" worker node in "ha-330867" cluster
	I0831 23:02:36.974606  333222 cache.go:121] Beginning downloading kic base image for docker with crio
	I0831 23:02:36.976311  333222 out.go:177] * Pulling base image v0.0.44-1724862063-19530 ...
	I0831 23:02:36.977432  333222 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 23:02:36.977469  333222 cache.go:56] Caching tarball of preloaded images
	I0831 23:02:36.977521  333222 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local docker daemon
	I0831 23:02:36.977598  333222 preload.go:172] Found /home/jenkins/minikube-integration/18943-277799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0831 23:02:36.977610  333222 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0831 23:02:36.977748  333222 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/config.json ...
	W0831 23:02:37.000479  333222 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 is of wrong architecture
	I0831 23:02:37.000500  333222 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 to local cache
	I0831 23:02:37.000604  333222 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory
	I0831 23:02:37.000627  333222 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory, skipping pull
	I0831 23:02:37.000632  333222 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 exists in cache, skipping pull
	I0831 23:02:37.000645  333222 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 as a tarball
	I0831 23:02:37.000655  333222 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from local cache
	I0831 23:02:37.001868  333222 image.go:273] response: 
	I0831 23:02:37.155688  333222 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from cached tarball
	I0831 23:02:37.155732  333222 cache.go:194] Successfully downloaded all kic artifacts
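The "wrong architecture" warning at 23:02:37.000479 means the kicbase image found in the local docker daemon does not match arm64, so minikube falls back to its cached tarball. One way to check that by hand is sketched below in Go, shelling out to the docker CLI; the tag is the one from the log with the digest omitted for brevity.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Same image tag as in the log above (digest omitted).
    	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530"
    	// Architecture is a standard top-level field in `docker image inspect` output.
    	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Architecture}}", ref).CombinedOutput()
    	if err != nil {
    		fmt.Println("inspect failed:", err, string(out))
    		return
    	}
    	fmt.Printf("local architecture: %s", out) // expect arm64 on this worker
    }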
	I0831 23:02:37.155764  333222 start.go:360] acquireMachinesLock for ha-330867-m04: {Name:mk08f642f0ee1abb65ae3ac6825e6c93f3c32dce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 23:02:37.155840  333222 start.go:364] duration metric: took 56.081µs to acquireMachinesLock for "ha-330867-m04"
	I0831 23:02:37.155862  333222 start.go:96] Skipping create...Using existing machine configuration
	I0831 23:02:37.155867  333222 fix.go:54] fixHost starting: m04
	I0831 23:02:37.156147  333222 cli_runner.go:164] Run: docker container inspect ha-330867-m04 --format={{.State.Status}}
	I0831 23:02:37.175983  333222 fix.go:112] recreateIfNeeded on ha-330867-m04: state=Stopped err=<nil>
	W0831 23:02:37.176014  333222 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 23:02:37.178934  333222 out.go:177] * Restarting existing docker container for "ha-330867-m04" ...
	I0831 23:02:37.180978  333222 cli_runner.go:164] Run: docker start ha-330867-m04
	I0831 23:02:37.518183  333222 cli_runner.go:164] Run: docker container inspect ha-330867-m04 --format={{.State.Status}}
	I0831 23:02:37.538153  333222 kic.go:435] container "ha-330867-m04" state is running.
	I0831 23:02:37.538551  333222 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-330867")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-330867-m04
	I0831 23:02:37.559844  333222 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/config.json ...
	I0831 23:02:37.560096  333222 machine.go:93] provisionDockerMachine start ...
	I0831 23:02:37.560158  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m04
	I0831 23:02:37.585967  333222 main.go:141] libmachine: Using SSH client type: native
	I0831 23:02:37.586224  333222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I0831 23:02:37.586235  333222 main.go:141] libmachine: About to run SSH command:
	hostname
	I0831 23:02:37.587388  333222 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0831 23:02:40.728282  333222 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-330867-m04
	
	I0831 23:02:40.728307  333222 ubuntu.go:169] provisioning hostname "ha-330867-m04"
	I0831 23:02:40.728434  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m04
	I0831 23:02:40.758038  333222 main.go:141] libmachine: Using SSH client type: native
	I0831 23:02:40.758297  333222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I0831 23:02:40.758313  333222 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-330867-m04 && echo "ha-330867-m04" | sudo tee /etc/hostname
	I0831 23:02:40.912297  333222 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-330867-m04
	
	I0831 23:02:40.912383  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m04
	I0831 23:02:40.931967  333222 main.go:141] libmachine: Using SSH client type: native
	I0831 23:02:40.932269  333222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I0831 23:02:40.932298  333222 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-330867-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-330867-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-330867-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 23:02:41.073116  333222 main.go:141] libmachine: SSH cmd err, output: <nil>: 
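All of the provisioning above is driven by running shell commands over SSH on the container's forwarded port 22. A minimal golang.org/x/crypto/ssh sketch of the same mechanism; the port (33188) and key path come from the log, while everything else is illustrative rather than minikube's actual libmachine code.

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Key path and forwarded port taken from the log above; adjust for your environment.
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867-m04/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:33188", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput("hostname")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("remote hostname: %s", out)
    }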
	I0831 23:02:41.073142  333222 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-277799/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-277799/.minikube}
	I0831 23:02:41.073161  333222 ubuntu.go:177] setting up certificates
	I0831 23:02:41.073172  333222 provision.go:84] configureAuth start
	I0831 23:02:41.073237  333222 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-330867")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-330867-m04
	I0831 23:02:41.103059  333222 provision.go:143] copyHostCerts
	I0831 23:02:41.103105  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-277799/.minikube/ca.pem
	I0831 23:02:41.103142  333222 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-277799/.minikube/ca.pem, removing ...
	I0831 23:02:41.103153  333222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.pem
	I0831 23:02:41.103241  333222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-277799/.minikube/ca.pem (1082 bytes)
	I0831 23:02:41.103333  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-277799/.minikube/cert.pem
	I0831 23:02:41.103368  333222 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-277799/.minikube/cert.pem, removing ...
	I0831 23:02:41.103382  333222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-277799/.minikube/cert.pem
	I0831 23:02:41.103417  333222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-277799/.minikube/cert.pem (1123 bytes)
	I0831 23:02:41.103464  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-277799/.minikube/key.pem
	I0831 23:02:41.103485  333222 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-277799/.minikube/key.pem, removing ...
	I0831 23:02:41.103490  333222 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-277799/.minikube/key.pem
	I0831 23:02:41.103518  333222 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-277799/.minikube/key.pem (1675 bytes)
	I0831 23:02:41.103572  333222 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca-key.pem org=jenkins.ha-330867-m04 san=[127.0.0.1 192.168.49.5 ha-330867-m04 localhost minikube]
	I0831 23:02:41.353877  333222 provision.go:177] copyRemoteCerts
	I0831 23:02:41.354054  333222 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 23:02:41.354135  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m04
	I0831 23:02:41.375881  333222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867-m04/id_rsa Username:docker}
	I0831 23:02:41.490264  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0831 23:02:41.490332  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0831 23:02:41.520853  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0831 23:02:41.520916  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0831 23:02:41.551536  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0831 23:02:41.551601  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0831 23:02:41.577723  333222 provision.go:87] duration metric: took 504.53596ms to configureAuth
	I0831 23:02:41.577750  333222 ubuntu.go:193] setting minikube options for container-runtime
	I0831 23:02:41.578033  333222 config.go:182] Loaded profile config "ha-330867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:02:41.578155  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m04
	I0831 23:02:41.595713  333222 main.go:141] libmachine: Using SSH client type: native
	I0831 23:02:41.595958  333222 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33188 <nil> <nil>}
	I0831 23:02:41.595980  333222 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0831 23:02:41.902111  333222 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0831 23:02:41.902139  333222 machine.go:96] duration metric: took 4.342029086s to provisionDockerMachine
	I0831 23:02:41.902152  333222 start.go:293] postStartSetup for "ha-330867-m04" (driver="docker")
	I0831 23:02:41.902164  333222 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 23:02:41.902231  333222 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 23:02:41.902281  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m04
	I0831 23:02:41.928018  333222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867-m04/id_rsa Username:docker}
	I0831 23:02:42.037495  333222 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 23:02:42.046407  333222 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0831 23:02:42.046441  333222 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0831 23:02:42.046452  333222 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0831 23:02:42.046459  333222 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0831 23:02:42.046470  333222 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-277799/.minikube/addons for local assets ...
	I0831 23:02:42.046533  333222 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-277799/.minikube/files for local assets ...
	I0831 23:02:42.046610  333222 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem -> 2831972.pem in /etc/ssl/certs
	I0831 23:02:42.046618  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem -> /etc/ssl/certs/2831972.pem
	I0831 23:02:42.046720  333222 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0831 23:02:42.059432  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem --> /etc/ssl/certs/2831972.pem (1708 bytes)
	I0831 23:02:42.094828  333222 start.go:296] duration metric: took 192.660269ms for postStartSetup
	I0831 23:02:42.094921  333222 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 23:02:42.094970  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m04
	I0831 23:02:42.116454  333222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867-m04/id_rsa Username:docker}
	I0831 23:02:42.218656  333222 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0831 23:02:42.230770  333222 fix.go:56] duration metric: took 5.074893624s for fixHost
	I0831 23:02:42.230798  333222 start.go:83] releasing machines lock for "ha-330867-m04", held for 5.074947868s
	I0831 23:02:42.230880  333222 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-330867")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-330867-m04
	I0831 23:02:42.264621  333222 out.go:177] * Found network options:
	I0831 23:02:42.267287  333222 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W0831 23:02:42.269830  333222 proxy.go:119] fail to check proxy env: Error ip not in block
	W0831 23:02:42.269873  333222 proxy.go:119] fail to check proxy env: Error ip not in block
	W0831 23:02:42.269883  333222 proxy.go:119] fail to check proxy env: Error ip not in block
	W0831 23:02:42.269915  333222 proxy.go:119] fail to check proxy env: Error ip not in block
	W0831 23:02:42.269932  333222 proxy.go:119] fail to check proxy env: Error ip not in block
	W0831 23:02:42.269941  333222 proxy.go:119] fail to check proxy env: Error ip not in block
	I0831 23:02:42.270026  333222 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0831 23:02:42.270153  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m04
	I0831 23:02:42.270252  333222 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 23:02:42.270312  333222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m04
	I0831 23:02:42.304493  333222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867-m04/id_rsa Username:docker}
	I0831 23:02:42.333822  333222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867-m04/id_rsa Username:docker}
	I0831 23:02:42.601715  333222 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0831 23:02:42.609155  333222 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 23:02:42.618539  333222 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0831 23:02:42.618633  333222 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 23:02:42.627685  333222 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0831 23:02:42.627708  333222 start.go:495] detecting cgroup driver to use...
	I0831 23:02:42.627741  333222 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0831 23:02:42.627787  333222 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0831 23:02:42.643272  333222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 23:02:42.655100  333222 docker.go:217] disabling cri-docker service (if available) ...
	I0831 23:02:42.655171  333222 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0831 23:02:42.669155  333222 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0831 23:02:42.681663  333222 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0831 23:02:42.809863  333222 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0831 23:02:42.913563  333222 docker.go:233] disabling docker service ...
	I0831 23:02:42.913636  333222 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0831 23:02:42.928807  333222 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0831 23:02:42.941267  333222 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0831 23:02:43.053562  333222 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0831 23:02:43.155573  333222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0831 23:02:43.168896  333222 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 23:02:43.188034  333222 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0831 23:02:43.188169  333222 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:02:43.199442  333222 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0831 23:02:43.199515  333222 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:02:43.211282  333222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:02:43.221778  333222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:02:43.235299  333222 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 23:02:43.250216  333222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:02:43.262642  333222 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:02:43.274261  333222 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
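For orientation, the sed edits above amount to a CRI-O drop-in along the following lines. This is a hand reconstruction from the commands shown, not a dump of the actual file, and the key-to-section placement is an assumption based on the usual 02-crio.conf layout.

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]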
	I0831 23:02:43.296661  333222 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 23:02:43.306276  333222 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 23:02:43.315248  333222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 23:02:43.418401  333222 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0831 23:02:43.564855  333222 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0831 23:02:43.564976  333222 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0831 23:02:43.568695  333222 start.go:563] Will wait 60s for crictl version
	I0831 23:02:43.568826  333222 ssh_runner.go:195] Run: which crictl
	I0831 23:02:43.572996  333222 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 23:02:43.619652  333222 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0831 23:02:43.619737  333222 ssh_runner.go:195] Run: crio --version
	I0831 23:02:43.659870  333222 ssh_runner.go:195] Run: crio --version
	I0831 23:02:43.702382  333222 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0831 23:02:43.705040  333222 out.go:177]   - env NO_PROXY=192.168.49.2
	I0831 23:02:43.707945  333222 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0831 23:02:43.710328  333222 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I0831 23:02:43.712764  333222 cli_runner.go:164] Run: docker network inspect ha-330867 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0831 23:02:43.730912  333222 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0831 23:02:43.734819  333222 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 23:02:43.755464  333222 mustload.go:65] Loading cluster: ha-330867
	I0831 23:02:43.755714  333222 config.go:182] Loaded profile config "ha-330867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:02:43.755977  333222 cli_runner.go:164] Run: docker container inspect ha-330867 --format={{.State.Status}}
	I0831 23:02:43.777533  333222 host.go:66] Checking if "ha-330867" exists ...
	I0831 23:02:43.777817  333222 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867 for IP: 192.168.49.5
	I0831 23:02:43.777831  333222 certs.go:194] generating shared ca certs ...
	I0831 23:02:43.777846  333222 certs.go:226] acquiring lock for ca certs: {Name:mk25c48345241d49df22687ae20353d5a7b46e8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:02:43.777968  333222 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key
	I0831 23:02:43.778018  333222 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key
	I0831 23:02:43.778033  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0831 23:02:43.778050  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0831 23:02:43.778063  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0831 23:02:43.778088  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0831 23:02:43.778183  333222 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/283197.pem (1338 bytes)
	W0831 23:02:43.778226  333222 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-277799/.minikube/certs/283197_empty.pem, impossibly tiny 0 bytes
	I0831 23:02:43.778240  333222 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca-key.pem (1679 bytes)
	I0831 23:02:43.778266  333222 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem (1082 bytes)
	I0831 23:02:43.778295  333222 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem (1123 bytes)
	I0831 23:02:43.778320  333222 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem (1675 bytes)
	I0831 23:02:43.778394  333222 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem (1708 bytes)
	I0831 23:02:43.778433  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:02:43.778467  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/283197.pem -> /usr/share/ca-certificates/283197.pem
	I0831 23:02:43.778487  333222 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem -> /usr/share/ca-certificates/2831972.pem
	I0831 23:02:43.778509  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 23:02:43.809807  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 23:02:43.845477  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 23:02:43.878493  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0831 23:02:43.906025  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 23:02:43.932337  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/certs/283197.pem --> /usr/share/ca-certificates/283197.pem (1338 bytes)
	I0831 23:02:43.959001  333222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem --> /usr/share/ca-certificates/2831972.pem (1708 bytes)
	I0831 23:02:43.986122  333222 ssh_runner.go:195] Run: openssl version
	I0831 23:02:43.993682  333222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 23:02:44.004950  333222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:02:44.013727  333222 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:02:44.013826  333222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:02:44.022749  333222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0831 23:02:44.033253  333222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/283197.pem && ln -fs /usr/share/ca-certificates/283197.pem /etc/ssl/certs/283197.pem"
	I0831 23:02:44.045611  333222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/283197.pem
	I0831 23:02:44.049892  333222 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 31 22:51 /usr/share/ca-certificates/283197.pem
	I0831 23:02:44.049983  333222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/283197.pem
	I0831 23:02:44.058279  333222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/283197.pem /etc/ssl/certs/51391683.0"
	I0831 23:02:44.073958  333222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2831972.pem && ln -fs /usr/share/ca-certificates/2831972.pem /etc/ssl/certs/2831972.pem"
	I0831 23:02:44.087291  333222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2831972.pem
	I0831 23:02:44.092027  333222 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 31 22:51 /usr/share/ca-certificates/2831972.pem
	I0831 23:02:44.092160  333222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2831972.pem
	I0831 23:02:44.100932  333222 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2831972.pem /etc/ssl/certs/3ec20f2e.0"
	I0831 23:02:44.111561  333222 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 23:02:44.116152  333222 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0831 23:02:44.116238  333222 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.31.0  false true} ...
	I0831 23:02:44.116341  333222 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-330867-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-330867 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0831 23:02:44.116464  333222 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0831 23:02:44.126602  333222 binaries.go:44] Found k8s binaries, skipping transfer
	I0831 23:02:44.126674  333222 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0831 23:02:44.137646  333222 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0831 23:02:44.158624  333222 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 23:02:44.181016  333222 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0831 23:02:44.185407  333222 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 23:02:44.201084  333222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 23:02:44.306628  333222 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 23:02:44.320662  333222 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0831 23:02:44.321107  333222 config.go:182] Loaded profile config "ha-330867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:02:44.324093  333222 out.go:177] * Verifying Kubernetes components...
	I0831 23:02:44.326777  333222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 23:02:44.422895  333222 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 23:02:44.436295  333222 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-277799/kubeconfig
	I0831 23:02:44.436660  333222 kapi.go:59] client config for ha-330867: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/client.key", CAFile:"/home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19cbad0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0831 23:02:44.436729  333222 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0831 23:02:44.436942  333222 node_ready.go:35] waiting up to 6m0s for node "ha-330867-m04" to be "Ready" ...
	I0831 23:02:44.437034  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:02:44.437046  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:44.437055  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:44.437061  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:44.439837  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:44.937727  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:02:44.937749  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:44.937759  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:44.937763  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:44.940767  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:45.437787  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:02:45.437811  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:45.437821  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:45.437825  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:45.440811  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:45.937233  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:02:45.937255  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:45.937266  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:45.937270  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:45.940080  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:46.437861  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:02:46.437885  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:46.437894  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:46.437901  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:46.441331  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:46.442178  333222 node_ready.go:53] node "ha-330867-m04" has status "Ready":"Unknown"
	I0831 23:02:46.937379  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:02:46.937407  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:46.937417  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:46.937423  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:46.940297  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:47.438166  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:02:47.438231  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:47.438256  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:47.438277  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:47.441721  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:47.937211  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:02:47.937243  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:47.937255  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:47.937267  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:47.940353  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:48.437622  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:02:48.437643  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:48.437652  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:48.437657  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:48.440331  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:48.937328  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:02:48.937352  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:48.937359  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:48.937363  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:48.940071  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:48.940996  333222 node_ready.go:53] node "ha-330867-m04" has status "Ready":"Unknown"
	I0831 23:02:49.437789  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:02:49.437815  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:49.437824  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:49.437829  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:49.440685  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:49.937288  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:02:49.937308  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:49.937317  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:49.937321  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:49.940045  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:50.438115  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:02:50.438139  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:50.438149  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:50.438154  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:50.441126  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:50.937659  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:02:50.937686  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:50.937696  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:50.937700  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:50.940797  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:50.941551  333222 node_ready.go:53] node "ha-330867-m04" has status "Ready":"Unknown"
	I0831 23:02:51.437233  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:02:51.437261  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:51.437270  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:51.437275  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:51.440152  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:51.937248  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:02:51.937283  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:51.937293  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:51.937298  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:51.940190  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:51.940857  333222 node_ready.go:49] node "ha-330867-m04" has status "Ready":"True"
	I0831 23:02:51.940876  333222 node_ready.go:38] duration metric: took 7.503916753s for node "ha-330867-m04" to be "Ready" ...
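The repeated GETs of /api/v1/nodes/ha-330867-m04 above are the readiness poll: fetch the node, check its Ready condition, sleep, retry until the 6m0s budget runs out. A rough client-go equivalent is sketched below; the package and function names and the 500ms interval are illustrative assumptions, not minikube's actual implementation:

	// A rough client-go equivalent of the node "Ready" wait in this log.
	// Everything here (package name, waitNodeReady, the 500ms interval) is an
	// illustrative assumption, not minikube's real code.
	package sketch

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		deadline := time.After(timeout)
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, cond := range node.Status.Conditions {
					// This condition is what appears as Ready:"Unknown" / "True" in the log above.
					if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-deadline:
				return context.DeadlineExceeded
			case <-time.After(500 * time.Millisecond):
			}
		}
	}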
	I0831 23:02:51.940886  333222 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 23:02:51.940954  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0831 23:02:51.940968  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:51.940977  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:51.940982  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:51.946598  333222 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0831 23:02:51.956279  333222 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-d67w5" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:51.956398  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d67w5
	I0831 23:02:51.956426  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:51.956438  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:51.956443  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:51.959488  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:51.960628  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:02:51.960648  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:51.960656  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:51.960661  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:51.963348  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:51.963876  333222 pod_ready.go:98] node "ha-330867" hosting pod "coredns-6f6b679f8f-d67w5" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:02:51.963891  333222 pod_ready.go:82] duration metric: took 7.571889ms for pod "coredns-6f6b679f8f-d67w5" in "kube-system" namespace to be "Ready" ...
	E0831 23:02:51.963901  333222 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-330867" hosting pod "coredns-6f6b679f8f-d67w5" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:02:51.963916  333222 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-drznk" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:51.963988  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-drznk
	I0831 23:02:51.963994  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:51.964001  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:51.964006  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:51.967217  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:51.968003  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:02:51.968023  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:51.968034  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:51.968040  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:51.970778  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:51.971559  333222 pod_ready.go:98] node "ha-330867" hosting pod "coredns-6f6b679f8f-drznk" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:02:51.971592  333222 pod_ready.go:82] duration metric: took 7.666789ms for pod "coredns-6f6b679f8f-drznk" in "kube-system" namespace to be "Ready" ...
	E0831 23:02:51.971604  333222 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-330867" hosting pod "coredns-6f6b679f8f-drznk" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:02:51.971613  333222 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-330867" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:51.971695  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867
	I0831 23:02:51.971711  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:51.971725  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:51.971738  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:51.974752  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:51.975706  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:02:51.975729  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:51.975738  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:51.975743  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:51.978384  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:51.979133  333222 pod_ready.go:98] node "ha-330867" hosting pod "etcd-ha-330867" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:02:51.979157  333222 pod_ready.go:82] duration metric: took 7.532644ms for pod "etcd-ha-330867" in "kube-system" namespace to be "Ready" ...
	E0831 23:02:51.979168  333222 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-330867" hosting pod "etcd-ha-330867" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:02:51.979175  333222 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:51.979239  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m02
	I0831 23:02:51.979252  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:51.979260  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:51.979263  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:51.981888  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:51.982624  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:02:51.982641  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:51.982657  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:51.982666  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:51.986217  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:51.986935  333222 pod_ready.go:93] pod "etcd-ha-330867-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 23:02:51.986959  333222 pod_ready.go:82] duration metric: took 7.776515ms for pod "etcd-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:51.986971  333222 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-330867-m03" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:52.137244  333222 request.go:632] Waited for 150.206489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m03
	I0831 23:02:52.137359  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m03
	I0831 23:02:52.137374  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:52.138087  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:52.138095  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:52.141123  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
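The "Waited for ... due to client-side throttling, not priority and fairness" lines that begin here come from client-go's own rate limiter, not from the API server: the rest.Config dumped by kapi.go:59 earlier leaves QPS and Burst at 0, so client-go falls back to its defaults of 5 QPS with a burst of 10 and spaces the node/pod GETs out by roughly 200ms. A minimal sketch of how that limit would be raised, with illustrative values rather than anything minikube actually sets, follows:

	// Illustrative only: relaxing client-go's client-side rate limit.
	// With QPS/Burst left at zero (as in the rest.Config above), client-go
	// defaults to 5 QPS / burst 10, which produces the waits logged here.
	package sketch

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func relaxThrottling(cfg *rest.Config) (*kubernetes.Clientset, error) {
		cfg.QPS = 50    // assumption: example value
		cfg.Burst = 100 // assumption: example value
		return kubernetes.NewForConfig(cfg)
	}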
	I0831 23:02:52.337254  333222 request.go:632] Waited for 195.260454ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:52.337325  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:52.337336  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:52.337345  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:52.337353  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:52.340879  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:52.341528  333222 pod_ready.go:93] pod "etcd-ha-330867-m03" in "kube-system" namespace has status "Ready":"True"
	I0831 23:02:52.341548  333222 pod_ready.go:82] duration metric: took 354.568801ms for pod "etcd-ha-330867-m03" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:52.341571  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-330867" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:52.537508  333222 request.go:632] Waited for 195.870199ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-330867
	I0831 23:02:52.537573  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-330867
	I0831 23:02:52.537585  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:52.537596  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:52.537602  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:52.541115  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:52.737252  333222 request.go:632] Waited for 195.306525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:02:52.737324  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:02:52.737337  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:52.737346  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:52.737354  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:52.740494  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:52.741156  333222 pod_ready.go:98] node "ha-330867" hosting pod "kube-apiserver-ha-330867" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:02:52.741182  333222 pod_ready.go:82] duration metric: took 399.603009ms for pod "kube-apiserver-ha-330867" in "kube-system" namespace to be "Ready" ...
	E0831 23:02:52.741202  333222 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-330867" hosting pod "kube-apiserver-ha-330867" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:02:52.741213  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:52.937900  333222 request.go:632] Waited for 196.580758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-330867-m02
	I0831 23:02:52.937982  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-330867-m02
	I0831 23:02:52.937995  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:52.938005  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:52.938010  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:52.940990  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:53.137265  333222 request.go:632] Waited for 195.245964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:02:53.137395  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:02:53.137410  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:53.137419  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:53.137423  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:53.140262  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:53.140859  333222 pod_ready.go:93] pod "kube-apiserver-ha-330867-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 23:02:53.140881  333222 pod_ready.go:82] duration metric: took 399.653404ms for pod "kube-apiserver-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:53.140893  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-330867-m03" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:53.337690  333222 request.go:632] Waited for 196.709808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-330867-m03
	I0831 23:02:53.337749  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-330867-m03
	I0831 23:02:53.337756  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:53.337770  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:53.337777  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:53.340561  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:53.537898  333222 request.go:632] Waited for 196.309047ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:53.537953  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:53.537960  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:53.537969  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:53.537974  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:53.541112  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:53.541744  333222 pod_ready.go:93] pod "kube-apiserver-ha-330867-m03" in "kube-system" namespace has status "Ready":"True"
	I0831 23:02:53.541769  333222 pod_ready.go:82] duration metric: took 400.867535ms for pod "kube-apiserver-ha-330867-m03" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:53.541783  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-330867" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:53.737186  333222 request.go:632] Waited for 195.331896ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-330867
	I0831 23:02:53.737267  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-330867
	I0831 23:02:53.737296  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:53.737313  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:53.737320  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:53.740286  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:53.937494  333222 request.go:632] Waited for 196.321856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:02:53.937564  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:02:53.937576  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:53.937585  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:53.937588  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:53.940486  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:53.941449  333222 pod_ready.go:98] node "ha-330867" hosting pod "kube-controller-manager-ha-330867" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:02:53.941472  333222 pod_ready.go:82] duration metric: took 399.680005ms for pod "kube-controller-manager-ha-330867" in "kube-system" namespace to be "Ready" ...
	E0831 23:02:53.941482  333222 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-330867" hosting pod "kube-controller-manager-ha-330867" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:02:53.941490  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:54.137869  333222 request.go:632] Waited for 196.304649ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-330867-m02
	I0831 23:02:54.137993  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-330867-m02
	I0831 23:02:54.138007  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:54.138017  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:54.138022  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:54.141039  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:54.337333  333222 request.go:632] Waited for 195.258657ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:02:54.337407  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:02:54.337415  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:54.337423  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:54.337431  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:54.340186  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:54.341044  333222 pod_ready.go:93] pod "kube-controller-manager-ha-330867-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 23:02:54.341104  333222 pod_ready.go:82] duration metric: took 399.606061ms for pod "kube-controller-manager-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:54.341131  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-330867-m03" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:54.537523  333222 request.go:632] Waited for 196.297208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-330867-m03
	I0831 23:02:54.537627  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-330867-m03
	I0831 23:02:54.537638  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:54.537649  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:54.537654  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:54.540984  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:54.737877  333222 request.go:632] Waited for 196.143723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:54.737962  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:54.737975  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:54.737984  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:54.737989  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:54.741080  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:54.741845  333222 pod_ready.go:93] pod "kube-controller-manager-ha-330867-m03" in "kube-system" namespace has status "Ready":"True"
	I0831 23:02:54.741894  333222 pod_ready.go:82] duration metric: took 400.740955ms for pod "kube-controller-manager-ha-330867-m03" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:54.741918  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2km6v" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:54.937249  333222 request.go:632] Waited for 195.246744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2km6v
	I0831 23:02:54.937314  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2km6v
	I0831 23:02:54.937320  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:54.937331  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:54.937336  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:54.940325  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:55.137704  333222 request.go:632] Waited for 196.32901ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:55.137787  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:02:55.137799  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:55.137808  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:55.137813  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:55.140977  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:55.141808  333222 pod_ready.go:93] pod "kube-proxy-2km6v" in "kube-system" namespace has status "Ready":"True"
	I0831 23:02:55.141834  333222 pod_ready.go:82] duration metric: took 399.906777ms for pod "kube-proxy-2km6v" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:55.141847  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5n584" in "kube-system" namespace to be "Ready" ...
	I0831 23:02:55.338097  333222 request.go:632] Waited for 196.171646ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:02:55.338225  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:02:55.338237  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:55.338246  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:55.338256  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:55.345258  333222 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0831 23:02:55.537418  333222 request.go:632] Waited for 191.458408ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:02:55.537491  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:02:55.537507  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:55.537516  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:55.537524  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:55.540570  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:55.738247  333222 request.go:632] Waited for 96.249878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:02:55.738313  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:02:55.738324  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:55.738333  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:55.738339  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:55.741267  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:55.937502  333222 request.go:632] Waited for 195.346181ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:02:55.937578  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:02:55.937584  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:55.937593  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:55.937646  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:55.940499  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:56.142569  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:02:56.142587  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:56.142596  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:56.142601  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:56.153725  333222 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0831 23:02:56.337740  333222 request.go:632] Waited for 182.199702ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:02:56.337933  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:02:56.337962  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:56.338014  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:56.338042  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:56.343705  333222 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0831 23:02:56.642294  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:02:56.642317  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:56.642326  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:56.642332  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:56.645391  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:56.737438  333222 request.go:632] Waited for 91.251875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:02:56.737554  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:02:56.737565  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:56.737605  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:56.737611  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:56.740712  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:57.142819  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:02:57.142843  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:57.142854  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:57.142858  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:57.145869  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:57.146995  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:02:57.147017  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:57.147026  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:57.147032  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:57.149631  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:57.150298  333222 pod_ready.go:103] pod "kube-proxy-5n584" in "kube-system" namespace has status "Ready":"False"
	I0831 23:02:57.642691  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:02:57.642758  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:57.642781  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:57.642788  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:57.646116  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:57.647088  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:02:57.647153  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:57.647169  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:57.647175  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:57.650059  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:58.142102  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:02:58.142128  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:58.142138  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:58.142145  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:58.145179  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:58.146057  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:02:58.146085  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:58.146095  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:58.146101  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:58.148853  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:58.642315  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:02:58.642337  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:58.642347  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:58.642352  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:58.645924  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:58.647120  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:02:58.647141  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:58.647152  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:58.647157  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:58.649735  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:59.142472  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:02:59.142494  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:59.142504  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:59.142508  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:59.145556  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:59.146290  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:02:59.146338  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:59.146355  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:59.146360  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:59.149142  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:02:59.642105  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:02:59.642129  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:59.642140  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:59.642146  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:59.645266  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:59.646079  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:02:59.646105  333222 round_trippers.go:469] Request Headers:
	I0831 23:02:59.646115  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:02:59.646118  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:02:59.649145  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:02:59.649650  333222 pod_ready.go:103] pod "kube-proxy-5n584" in "kube-system" namespace has status "Ready":"False"
	I0831 23:03:00.195478  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:03:00.195511  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:00.195520  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:00.195525  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:00.235740  333222 round_trippers.go:574] Response Status: 200 OK in 40 milliseconds
	I0831 23:03:00.237258  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:03:00.237280  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:00.237290  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:00.237296  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:00.268676  333222 round_trippers.go:574] Response Status: 200 OK in 31 milliseconds
	I0831 23:03:00.642138  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:03:00.642163  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:00.642175  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:00.642181  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:00.645423  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:03:00.646196  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:03:00.646216  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:00.646225  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:00.646230  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:00.649133  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:03:01.142071  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:03:01.142102  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:01.142113  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:01.142118  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:01.145447  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:03:01.146192  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:03:01.146212  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:01.146221  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:01.146225  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:01.149639  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:03:01.642126  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:03:01.642153  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:01.642164  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:01.642168  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:01.645543  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:03:01.646630  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:03:01.646659  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:01.646669  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:01.646672  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:01.649380  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:03:01.649894  333222 pod_ready.go:103] pod "kube-proxy-5n584" in "kube-system" namespace has status "Ready":"False"
	I0831 23:03:02.142177  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:03:02.142202  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:02.142218  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:02.142223  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:02.145180  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:03:02.145996  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:03:02.146016  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:02.146025  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:02.146029  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:02.148844  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:03:02.642316  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:03:02.642340  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:02.642350  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:02.642355  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:02.645314  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:03:02.646215  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:03:02.646236  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:02.646245  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:02.646251  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:02.649001  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:03:03.142856  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:03:03.142883  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:03.142897  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:03.142903  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:03.146246  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:03:03.147011  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:03:03.147032  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:03.147041  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:03.147045  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:03.149661  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:03:03.642074  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:03:03.642105  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:03.642115  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:03.642120  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:03.645233  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:03:03.646041  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:03:03.646062  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:03.646074  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:03.646079  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:03.648749  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:03:04.142535  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:03:04.142560  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:04.142570  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:04.142576  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:04.145664  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:03:04.146480  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:03:04.146499  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:04.146508  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:04.146514  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:04.149253  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:03:04.149810  333222 pod_ready.go:103] pod "kube-proxy-5n584" in "kube-system" namespace has status "Ready":"False"
	I0831 23:03:04.642885  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:03:04.642906  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:04.642916  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:04.642922  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:04.645874  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:03:04.646621  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:03:04.646641  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:04.646651  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:04.646656  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:04.649174  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:03:05.142138  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:03:05.142165  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:05.142175  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:05.142180  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:05.145023  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:03:05.146276  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:03:05.146334  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:05.146352  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:05.146356  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:05.149286  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:03:05.642926  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:03:05.642958  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:05.642981  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:05.642986  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:05.646184  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:03:05.646889  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:03:05.646911  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:05.646920  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:05.646925  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:05.649820  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:03:06.142015  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:03:06.142042  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:06.142051  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:06.142056  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:06.145231  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:03:06.146038  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:03:06.146055  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:06.146064  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:06.146069  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:06.148909  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:03:06.642117  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:03:06.642142  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:06.642152  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:06.642156  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:06.645031  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:03:06.646099  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:03:06.646119  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:06.646129  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:06.646135  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:06.648939  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:03:06.649466  333222 pod_ready.go:103] pod "kube-proxy-5n584" in "kube-system" namespace has status "Ready":"False"
	I0831 23:03:07.142279  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:03:07.142302  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:07.142312  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:07.142317  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:07.145396  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:03:07.146411  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:03:07.146431  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:07.146441  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:07.146445  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:07.148857  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:03:07.643236  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:03:07.643264  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:07.643274  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:07.643280  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:07.649035  333222 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0831 23:03:07.650157  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:03:07.650178  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:07.650188  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:07.650194  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:07.657263  333222 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0831 23:03:08.142383  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:03:08.142409  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:08.142419  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:08.142423  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:08.145189  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:03:08.145903  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:03:08.145918  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:08.145927  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:08.145931  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:08.149041  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:03:08.149711  333222 pod_ready.go:93] pod "kube-proxy-5n584" in "kube-system" namespace has status "Ready":"True"
	I0831 23:03:08.149734  333222 pod_ready.go:82] duration metric: took 13.00787843s for pod "kube-proxy-5n584" in "kube-system" namespace to be "Ready" ...
	I0831 23:03:08.149746  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-72g7x" in "kube-system" namespace to be "Ready" ...
	I0831 23:03:08.149812  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-72g7x
	I0831 23:03:08.149823  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:08.149831  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:08.149836  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:08.152881  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:03:08.153723  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:03:08.153742  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:08.153751  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:08.153755  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:08.156469  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:03:08.157114  333222 pod_ready.go:93] pod "kube-proxy-72g7x" in "kube-system" namespace has status "Ready":"True"
	I0831 23:03:08.157138  333222 pod_ready.go:82] duration metric: took 7.383853ms for pod "kube-proxy-72g7x" in "kube-system" namespace to be "Ready" ...
	I0831 23:03:08.157150  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fzpmn" in "kube-system" namespace to be "Ready" ...
	I0831 23:03:08.157212  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fzpmn
	I0831 23:03:08.157222  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:08.157230  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:08.157234  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:08.160123  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:03:08.160829  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:03:08.160883  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:08.160899  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:08.160905  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:08.163664  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:03:08.164448  333222 pod_ready.go:98] node "ha-330867" hosting pod "kube-proxy-fzpmn" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:03:08.164472  333222 pod_ready.go:82] duration metric: took 7.315004ms for pod "kube-proxy-fzpmn" in "kube-system" namespace to be "Ready" ...
	E0831 23:03:08.164483  333222 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-330867" hosting pod "kube-proxy-fzpmn" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:03:08.164491  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-330867" in "kube-system" namespace to be "Ready" ...
	I0831 23:03:08.164564  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-330867
	I0831 23:03:08.164575  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:08.164584  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:08.164589  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:08.167635  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:03:08.168248  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:03:08.168319  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:08.168357  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:08.168385  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:08.171837  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:03:08.172561  333222 pod_ready.go:98] node "ha-330867" hosting pod "kube-scheduler-ha-330867" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:03:08.172596  333222 pod_ready.go:82] duration metric: took 8.092379ms for pod "kube-scheduler-ha-330867" in "kube-system" namespace to be "Ready" ...
	E0831 23:03:08.172609  333222 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-330867" hosting pod "kube-scheduler-ha-330867" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:03:08.172616  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:03:08.172696  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-330867-m02
	I0831 23:03:08.172705  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:08.172721  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:08.172732  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:08.176144  333222 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:03:08.176947  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:03:08.176968  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:08.176977  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:08.176982  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:08.179773  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:03:08.180399  333222 pod_ready.go:93] pod "kube-scheduler-ha-330867-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 23:03:08.180482  333222 pod_ready.go:82] duration metric: took 7.854571ms for pod "kube-scheduler-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:03:08.180501  333222 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-330867-m03" in "kube-system" namespace to be "Ready" ...
	I0831 23:03:08.342839  333222 request.go:632] Waited for 162.263779ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-330867-m03
	I0831 23:03:08.342927  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-330867-m03
	I0831 23:03:08.342939  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:08.342948  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:08.342957  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:08.345778  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:03:08.543408  333222 request.go:632] Waited for 196.945985ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:03:08.543563  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:03:08.543604  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:08.543629  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:08.543648  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:08.546650  333222 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:03:08.547190  333222 pod_ready.go:93] pod "kube-scheduler-ha-330867-m03" in "kube-system" namespace has status "Ready":"True"
	I0831 23:03:08.547213  333222 pod_ready.go:82] duration metric: took 366.702671ms for pod "kube-scheduler-ha-330867-m03" in "kube-system" namespace to be "Ready" ...
	I0831 23:03:08.547227  333222 pod_ready.go:39] duration metric: took 16.606330238s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 23:03:08.547246  333222 system_svc.go:44] waiting for kubelet service to be running ....
	I0831 23:03:08.547306  333222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 23:03:08.559544  333222 system_svc.go:56] duration metric: took 12.288589ms WaitForService to wait for kubelet
	I0831 23:03:08.559575  333222 kubeadm.go:582] duration metric: took 24.238866668s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 23:03:08.559600  333222 node_conditions.go:102] verifying NodePressure condition ...
	I0831 23:03:08.743018  333222 request.go:632] Waited for 183.325293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0831 23:03:08.743074  333222 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0831 23:03:08.743081  333222 round_trippers.go:469] Request Headers:
	I0831 23:03:08.743090  333222 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:03:08.743097  333222 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:03:08.749084  333222 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0831 23:03:08.750777  333222 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0831 23:03:08.750809  333222 node_conditions.go:123] node cpu capacity is 2
	I0831 23:03:08.750821  333222 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0831 23:03:08.750827  333222 node_conditions.go:123] node cpu capacity is 2
	I0831 23:03:08.750832  333222 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0831 23:03:08.750836  333222 node_conditions.go:123] node cpu capacity is 2
	I0831 23:03:08.750859  333222 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0831 23:03:08.750870  333222 node_conditions.go:123] node cpu capacity is 2
	I0831 23:03:08.750875  333222 node_conditions.go:105] duration metric: took 191.270644ms to run NodePressure ...
	I0831 23:03:08.750888  333222 start.go:241] waiting for startup goroutines ...
	I0831 23:03:08.750913  333222 start.go:255] writing updated cluster config ...
	I0831 23:03:08.751279  333222 ssh_runner.go:195] Run: rm -f paused
	I0831 23:03:08.843032  333222 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0831 23:03:08.847811  333222 out.go:177] * Done! kubectl is now configured to use "ha-330867" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 31 23:01:52 ha-330867 crio[645]: time="2024-08-31 23:01:52.140994080Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1f0e20f0b304b4a063393109ce47be068cb18a60bfb5024dc13ec05785df0a9f/merged/etc/group: no such file or directory"
	Aug 31 23:01:52 ha-330867 crio[645]: time="2024-08-31 23:01:52.189247450Z" level=info msg="Created container 2ca91216b04f7294f9b2d31726269d375ccf9e6864e1757fca84d3fc0fe73c9b: kube-system/storage-provisioner/storage-provisioner" id=5d3e7d63-30de-46a1-ab0a-b8529e6797d1 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 31 23:01:52 ha-330867 crio[645]: time="2024-08-31 23:01:52.189801950Z" level=info msg="Starting container: 2ca91216b04f7294f9b2d31726269d375ccf9e6864e1757fca84d3fc0fe73c9b" id=5421a013-f3f2-4023-94a0-671a25e882a4 name=/runtime.v1.RuntimeService/StartContainer
	Aug 31 23:01:52 ha-330867 crio[645]: time="2024-08-31 23:01:52.197116789Z" level=info msg="Started container" PID=1854 containerID=2ca91216b04f7294f9b2d31726269d375ccf9e6864e1757fca84d3fc0fe73c9b description=kube-system/storage-provisioner/storage-provisioner id=5421a013-f3f2-4023-94a0-671a25e882a4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=585a9537a88b52a0a88fd4e783699cb655ace318fda35813467381c834173c06
	Aug 31 23:01:54 ha-330867 crio[645]: time="2024-08-31 23:01:54.833066259Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.0" id=acd0d9e1-651c-45b4-8bb4-dc992ff0d2cf name=/runtime.v1.ImageService/ImageStatus
	Aug 31 23:01:54 ha-330867 crio[645]: time="2024-08-31 23:01:54.833359721Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.0],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:ed8613b19e25d56d25e9ba0d83fd1e14f8ba070cb80e2674ba62ded55e260a9c registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d],Size_:86930758,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=acd0d9e1-651c-45b4-8bb4-dc992ff0d2cf name=/runtime.v1.ImageService/ImageStatus
	Aug 31 23:01:54 ha-330867 crio[645]: time="2024-08-31 23:01:54.834249727Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.0" id=5059876f-1f88-43df-af4e-a89af3640ba0 name=/runtime.v1.ImageService/ImageStatus
	Aug 31 23:01:54 ha-330867 crio[645]: time="2024-08-31 23:01:54.834464331Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.0],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:ed8613b19e25d56d25e9ba0d83fd1e14f8ba070cb80e2674ba62ded55e260a9c registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d],Size_:86930758,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=5059876f-1f88-43df-af4e-a89af3640ba0 name=/runtime.v1.ImageService/ImageStatus
	Aug 31 23:01:54 ha-330867 crio[645]: time="2024-08-31 23:01:54.835304014Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-330867/kube-controller-manager" id=6f2489ea-3269-46b6-a4ba-2e6ae6036643 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 31 23:01:54 ha-330867 crio[645]: time="2024-08-31 23:01:54.835439669Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 31 23:01:54 ha-330867 crio[645]: time="2024-08-31 23:01:54.973734143Z" level=info msg="Created container 63dc2314933255982d0626169f95ba999256264eeb9320b307ee7836ea807473: kube-system/kube-controller-manager-ha-330867/kube-controller-manager" id=6f2489ea-3269-46b6-a4ba-2e6ae6036643 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 31 23:01:54 ha-330867 crio[645]: time="2024-08-31 23:01:54.974584501Z" level=info msg="Starting container: 63dc2314933255982d0626169f95ba999256264eeb9320b307ee7836ea807473" id=489e6e94-c544-4676-ae92-a26260a83d02 name=/runtime.v1.RuntimeService/StartContainer
	Aug 31 23:01:54 ha-330867 crio[645]: time="2024-08-31 23:01:54.982352565Z" level=info msg="Started container" PID=1896 containerID=63dc2314933255982d0626169f95ba999256264eeb9320b307ee7836ea807473 description=kube-system/kube-controller-manager-ha-330867/kube-controller-manager id=489e6e94-c544-4676-ae92-a26260a83d02 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f2b1544592a3b10da915b67262867b24bffe69bb81072b49990d394102d6b181
	Aug 31 23:02:01 ha-330867 crio[645]: time="2024-08-31 23:02:01.220714633Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Aug 31 23:02:01 ha-330867 crio[645]: time="2024-08-31 23:02:01.225444306Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 31 23:02:01 ha-330867 crio[645]: time="2024-08-31 23:02:01.225482410Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 31 23:02:01 ha-330867 crio[645]: time="2024-08-31 23:02:01.225505573Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Aug 31 23:02:01 ha-330867 crio[645]: time="2024-08-31 23:02:01.229437703Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 31 23:02:01 ha-330867 crio[645]: time="2024-08-31 23:02:01.229479262Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 31 23:02:01 ha-330867 crio[645]: time="2024-08-31 23:02:01.229499906Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Aug 31 23:02:01 ha-330867 crio[645]: time="2024-08-31 23:02:01.234872684Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 31 23:02:01 ha-330867 crio[645]: time="2024-08-31 23:02:01.234909985Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 31 23:02:01 ha-330867 crio[645]: time="2024-08-31 23:02:01.234927683Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Aug 31 23:02:01 ha-330867 crio[645]: time="2024-08-31 23:02:01.238728146Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 31 23:02:01 ha-330867 crio[645]: time="2024-08-31 23:02:01.238768860Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	63dc231493325       fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd   About a minute ago   Running             kube-controller-manager   4                   f2b1544592a3b       kube-controller-manager-ha-330867
	2ca91216b04f7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   About a minute ago   Running             storage-provisioner       2                   585a9537a88b5       storage-provisioner
	5af3c59b32257       7e2a4e229620ba3a757dc3699d10e8f77c453b7ee71936521668dec51669679d   About a minute ago   Running             kube-vip                  1                   88220783400a0       kube-vip-ha-330867
	ac8e4f8f13cee       cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388   About a minute ago   Running             kube-apiserver            2                   4d9eabf894892       kube-apiserver-ha-330867
	95feef8d67837       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93   2 minutes ago        Running             coredns                   1                   789ae8819d0df       coredns-6f6b679f8f-d67w5
	b58abc4989cad       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   2 minutes ago        Exited              storage-provisioner       1                   585a9537a88b5       storage-provisioner
	8f1bf8b1596cb       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51   2 minutes ago        Running             kindnet-cni               1                   5d6e1de1aa6a4       kindnet-bfwhw
	f8b3dbbdfef2f       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   2 minutes ago        Running             busybox                   1                   d61f58ca6fb2f       busybox-7dff88458-j8jjz
	1b449db88ab81       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93   2 minutes ago        Running             coredns                   1                   4ec3ecf5a2182       coredns-6f6b679f8f-drznk
	1c3513e74ca3f       71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89   2 minutes ago        Running             kube-proxy                1                   63d612c9ac94c       kube-proxy-fzpmn
	433e9e1debd50       fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd   2 minutes ago        Exited              kube-controller-manager   3                   f2b1544592a3b       kube-controller-manager-ha-330867
	141e1929720c5       cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388   2 minutes ago        Exited              kube-apiserver            1                   4d9eabf894892       kube-apiserver-ha-330867
	a1dd00064cf31       7e2a4e229620ba3a757dc3699d10e8f77c453b7ee71936521668dec51669679d   2 minutes ago        Exited              kube-vip                  0                   88220783400a0       kube-vip-ha-330867
	8a7f51a7f6d28       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da   2 minutes ago        Running             etcd                      1                   c94ccc42d21ea       etcd-ha-330867
	25d333081f684       fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb   2 minutes ago        Running             kube-scheduler            1                   4ed886da5fa7f       kube-scheduler-ha-330867
	
	
	==> coredns [1b449db88ab812f71237626192da77ba1ff7b03cb67304cc64007721b6f9b838] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60423 - 7107 "HINFO IN 5179390581228408752.1018924419952430783. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026596734s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1467597596]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Aug-2024 23:01:20.546) (total time: 30001ms):
	Trace[1467597596]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (23:01:50.547)
	Trace[1467597596]: [30.001200296s] [30.001200296s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[830370397]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Aug-2024 23:01:20.577) (total time: 30005ms):
	Trace[830370397]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30005ms (23:01:50.582)
	Trace[830370397]: [30.005865518s] [30.005865518s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1165271430]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Aug-2024 23:01:20.547) (total time: 30035ms):
	Trace[1165271430]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30035ms (23:01:50.583)
	Trace[1165271430]: [30.035585526s] [30.035585526s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [95feef8d678372eedc69b16c94d1dce794e412fbd6e75f2807e263b55190e624] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45635 - 49544 "HINFO IN 7565603469126477506.9119011540505183531. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023316974s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1184183569]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Aug-2024 23:01:20.906) (total time: 30000ms):
	Trace[1184183569]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (23:01:50.907)
	Trace[1184183569]: [30.000807168s] [30.000807168s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[26761357]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Aug-2024 23:01:20.907) (total time: 30000ms):
	Trace[26761357]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (23:01:50.907)
	Trace[26761357]: [30.000301874s] [30.000301874s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1843221989]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Aug-2024 23:01:20.906) (total time: 30004ms):
	Trace[1843221989]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (23:01:50.910)
	Trace[1843221989]: [30.004547825s] [30.004547825s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-330867
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-330867
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=ha-330867
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_31T22_55_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:55:20 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-330867
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 23:03:22 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 31 Aug 2024 23:01:05 +0000   Sat, 31 Aug 2024 23:02:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 31 Aug 2024 23:01:05 +0000   Sat, 31 Aug 2024 23:02:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 31 Aug 2024 23:01:05 +0000   Sat, 31 Aug 2024 23:02:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 31 Aug 2024 23:01:05 +0000   Sat, 31 Aug 2024 23:02:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-330867
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 eb3c88e7e9a447f9ace40a13d251acf4
	  System UUID:                b6ed5d7f-ba7e-4438-84cd-2adf679138fc
	  Boot ID:                    4693db4c-e615-4c4b-bc39-ae1431ae1ebc
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-j8jjz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 coredns-6f6b679f8f-d67w5             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     7m56s
	  kube-system                 coredns-6f6b679f8f-drznk             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     7m56s
	  kube-system                 etcd-ha-330867                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m1s
	  kube-system                 kindnet-bfwhw                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m57s
	  kube-system                 kube-apiserver-ha-330867             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m1s
	  kube-system                 kube-controller-manager-ha-330867    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m1s
	  kube-system                 kube-proxy-fzpmn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m57s
	  kube-system                 kube-scheduler-ha-330867             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m1s
	  kube-system                 kube-vip-ha-330867                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m1s                   kube-proxy       
	  Normal   Starting                 7m55s                  kube-proxy       
	  Normal   NodeHasSufficientPID     8m1s                   kubelet          Node ha-330867 status is now: NodeHasSufficientPID
	  Normal   Starting                 8m1s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m1s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  8m1s                   kubelet          Node ha-330867 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m1s                   kubelet          Node ha-330867 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           7m58s                  node-controller  Node ha-330867 event: Registered Node ha-330867 in Controller
	  Normal   NodeReady                7m42s                  kubelet          Node ha-330867 status is now: NodeReady
	  Normal   RegisteredNode           7m25s                  node-controller  Node ha-330867 event: Registered Node ha-330867 in Controller
	  Normal   RegisteredNode           6m14s                  node-controller  Node ha-330867 event: Registered Node ha-330867 in Controller
	  Normal   RegisteredNode           3m39s                  node-controller  Node ha-330867 event: Registered Node ha-330867 in Controller
	  Normal   Starting                 2m56s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m56s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  2m56s (x8 over 2m56s)  kubelet          Node ha-330867 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m56s (x8 over 2m56s)  kubelet          Node ha-330867 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m56s (x7 over 2m56s)  kubelet          Node ha-330867 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m18s                  node-controller  Node ha-330867 event: Registered Node ha-330867 in Controller
	  Normal   RegisteredNode           84s                    node-controller  Node ha-330867 event: Registered Node ha-330867 in Controller
	  Normal   NodeNotReady             63s                    node-controller  Node ha-330867 status is now: NodeNotReady
	  Normal   RegisteredNode           59s                    node-controller  Node ha-330867 event: Registered Node ha-330867 in Controller
	
	
	Name:               ha-330867-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-330867-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=ha-330867
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_31T22_55_50_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:55:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-330867-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 23:03:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 23:01:00 +0000   Sat, 31 Aug 2024 22:55:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 23:01:00 +0000   Sat, 31 Aug 2024 22:55:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 23:01:00 +0000   Sat, 31 Aug 2024 22:55:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 23:01:00 +0000   Sat, 31 Aug 2024 22:56:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-330867-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 744186059ee04837b23a04bca8e8a630
	  System UUID:                e586dd77-2a99-41e1-8b8e-c5e85c8b470c
	  Boot ID:                    4693db4c-e615-4c4b-bc39-ae1431ae1ebc
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kj4qn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 etcd-ha-330867-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7m34s
	  kube-system                 kindnet-bdzqv                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m35s
	  kube-system                 kube-apiserver-ha-330867-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m34s
	  kube-system                 kube-controller-manager-ha-330867-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m29s
	  kube-system                 kube-proxy-72g7x                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 kube-scheduler-ha-330867-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m25s
	  kube-system                 kube-vip-ha-330867-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m1s                   kube-proxy       
	  Normal   Starting                 3m44s                  kube-proxy       
	  Normal   Starting                 7m28s                  kube-proxy       
	  Normal   NodeHasSufficientPID     7m35s (x7 over 7m36s)  kubelet          Node ha-330867-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  7m35s (x8 over 7m36s)  kubelet          Node ha-330867-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m35s (x8 over 7m36s)  kubelet          Node ha-330867-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           7m33s                  node-controller  Node ha-330867-m02 event: Registered Node ha-330867-m02 in Controller
	  Normal   RegisteredNode           7m25s                  node-controller  Node ha-330867-m02 event: Registered Node ha-330867-m02 in Controller
	  Normal   RegisteredNode           6m14s                  node-controller  Node ha-330867-m02 event: Registered Node ha-330867-m02 in Controller
	  Normal   NodeHasSufficientPID     4m9s (x7 over 4m9s)    kubelet          Node ha-330867-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 4m9s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m9s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  4m9s (x8 over 4m9s)    kubelet          Node ha-330867-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m9s (x8 over 4m9s)    kubelet          Node ha-330867-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           3m39s                  node-controller  Node ha-330867-m02 event: Registered Node ha-330867-m02 in Controller
	  Normal   NodeHasSufficientMemory  2m54s (x8 over 2m54s)  kubelet          Node ha-330867-m02 status is now: NodeHasSufficientMemory
	  Normal   Starting                 2m54s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m54s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasNoDiskPressure    2m54s (x8 over 2m54s)  kubelet          Node ha-330867-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m54s (x7 over 2m54s)  kubelet          Node ha-330867-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m18s                  node-controller  Node ha-330867-m02 event: Registered Node ha-330867-m02 in Controller
	  Normal   RegisteredNode           84s                    node-controller  Node ha-330867-m02 event: Registered Node ha-330867-m02 in Controller
	  Normal   RegisteredNode           59s                    node-controller  Node ha-330867-m02 event: Registered Node ha-330867-m02 in Controller
	
	
	Name:               ha-330867-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-330867-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=ha-330867
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_31T22_58_17_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:58:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-330867-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 23:03:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 23:02:51 +0000   Sat, 31 Aug 2024 23:02:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 23:02:51 +0000   Sat, 31 Aug 2024 23:02:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 23:02:51 +0000   Sat, 31 Aug 2024 23:02:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 23:02:51 +0000   Sat, 31 Aug 2024 23:02:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-330867-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb50bca531664dcf9d655f9c7f959812
	  System UUID:                9496d7a6-36cc-4e01-9c23-720cff5b6faa
	  Boot ID:                    4693db4c-e615-4c4b-bc39-ae1431ae1ebc
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2r2dv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 kindnet-fnccr              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m7s
	  kube-system                 kube-proxy-5n584           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 5m4s                 kube-proxy       
	  Normal   Starting                 16s                  kube-proxy       
	  Warning  CgroupV1                 5m7s                 kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  5m7s (x2 over 5m7s)  kubelet          Node ha-330867-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m7s (x2 over 5m7s)  kubelet          Node ha-330867-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m7s (x2 over 5m7s)  kubelet          Node ha-330867-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m5s                 node-controller  Node ha-330867-m04 event: Registered Node ha-330867-m04 in Controller
	  Normal   RegisteredNode           5m4s                 node-controller  Node ha-330867-m04 event: Registered Node ha-330867-m04 in Controller
	  Normal   RegisteredNode           5m2s                 node-controller  Node ha-330867-m04 event: Registered Node ha-330867-m04 in Controller
	  Normal   NodeReady                4m51s                kubelet          Node ha-330867-m04 status is now: NodeReady
	  Normal   RegisteredNode           3m39s                node-controller  Node ha-330867-m04 event: Registered Node ha-330867-m04 in Controller
	  Normal   RegisteredNode           2m18s                node-controller  Node ha-330867-m04 event: Registered Node ha-330867-m04 in Controller
	  Normal   NodeNotReady             98s                  node-controller  Node ha-330867-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           84s                  node-controller  Node ha-330867-m04 event: Registered Node ha-330867-m04 in Controller
	  Normal   RegisteredNode           59s                  node-controller  Node ha-330867-m04 event: Registered Node ha-330867-m04 in Controller
	  Normal   Starting                 45s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 45s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     38s (x7 over 45s)    kubelet          Node ha-330867-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  32s (x8 over 45s)    kubelet          Node ha-330867-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    32s (x8 over 45s)    kubelet          Node ha-330867-m04 status is now: NodeHasNoDiskPressure
	
	
	==> dmesg <==
	[Aug31 20:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014722] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.471263] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.854339] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.621095] kauditd_printk_skb: 36 callbacks suppressed
	[Aug31 21:33] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Aug31 21:36] hrtimer: interrupt took 85633258 ns
	[Aug31 22:54] FS-Cache: Duplicate cookie detected
	[  +0.013283] FS-Cache: O-cookie c=00000025 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001012] FS-Cache: O-cookie d=00000000cc966b56{9P.session} n=000000008b7f54ff
	[  +0.001103] FS-Cache: O-key=[10] '34323937323438343432'
	[  +0.000787] FS-Cache: N-cookie c=00000026 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000957] FS-Cache: N-cookie d=00000000cc966b56{9P.session} n=0000000065232866
	[  +0.001123] FS-Cache: N-key=[10] '34323937323438343432'
	
	
	==> etcd [8a7f51a7f6d28d594e80f2f5e5e556b98855358fdf219d8aae66d7a8e6447284] <==
	{"level":"info","ts":"2024-08-31T23:02:17.906401Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"2d2d30b12f2d08df"}
	{"level":"info","ts":"2024-08-31T23:02:17.913856Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"2d2d30b12f2d08df"}
	{"level":"info","ts":"2024-08-31T23:02:17.966173Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"aec36adc501070cc","to":"2d2d30b12f2d08df","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-31T23:02:17.966218Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"2d2d30b12f2d08df"}
	{"level":"info","ts":"2024-08-31T23:02:17.980334Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"aec36adc501070cc","to":"2d2d30b12f2d08df","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-31T23:02:17.980377Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"2d2d30b12f2d08df"}
	{"level":"warn","ts":"2024-08-31T23:03:13.500309Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:60246","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-08-31T23:03:13.550249Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(8665882295169845764 12593026477526642892)"}
	{"level":"info","ts":"2024-08-31T23:03:13.552518Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"2d2d30b12f2d08df","removed-remote-peer-urls":["https://192.168.49.4:2380"]}
	{"level":"info","ts":"2024-08-31T23:03:13.552619Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"2d2d30b12f2d08df"}
	{"level":"warn","ts":"2024-08-31T23:03:13.552862Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"2d2d30b12f2d08df"}
	{"level":"info","ts":"2024-08-31T23:03:13.552929Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"2d2d30b12f2d08df"}
	{"level":"warn","ts":"2024-08-31T23:03:13.553194Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"2d2d30b12f2d08df"}
	{"level":"info","ts":"2024-08-31T23:03:13.553259Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"2d2d30b12f2d08df"}
	{"level":"info","ts":"2024-08-31T23:03:13.553323Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"2d2d30b12f2d08df"}
	{"level":"warn","ts":"2024-08-31T23:03:13.553570Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"2d2d30b12f2d08df","error":"context canceled"}
	{"level":"warn","ts":"2024-08-31T23:03:13.553643Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"2d2d30b12f2d08df","error":"failed to read 2d2d30b12f2d08df on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-31T23:03:13.553685Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"2d2d30b12f2d08df"}
	{"level":"warn","ts":"2024-08-31T23:03:13.553846Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"2d2d30b12f2d08df","error":"context canceled"}
	{"level":"info","ts":"2024-08-31T23:03:13.553910Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"2d2d30b12f2d08df"}
	{"level":"info","ts":"2024-08-31T23:03:13.553955Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"2d2d30b12f2d08df"}
	{"level":"info","ts":"2024-08-31T23:03:13.554010Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"2d2d30b12f2d08df"}
	{"level":"info","ts":"2024-08-31T23:03:13.554055Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"2d2d30b12f2d08df"}
	{"level":"warn","ts":"2024-08-31T23:03:13.585555Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"aec36adc501070cc","remote-peer-id-stream-handler":"aec36adc501070cc","remote-peer-id-from":"2d2d30b12f2d08df"}
	{"level":"warn","ts":"2024-08-31T23:03:13.596847Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.49.4:60668","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:03:23 up  2:45,  0 users,  load average: 5.07, 3.32, 2.19
	Linux ha-330867 5.15.0-1068-aws #74~20.04.1-Ubuntu SMP Tue Aug 6 19:45:17 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [8f1bf8b1596cb2883ba751b0b9ab069d42d84e105c23a0b2f1c6b01fe8951ca7] <==
	I0831 23:02:51.229193       1 main.go:322] Node ha-330867-m03 has CIDR [10.244.2.0/24] 
	I0831 23:02:51.229234       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0831 23:02:51.229292       1 main.go:322] Node ha-330867-m04 has CIDR [10.244.3.0/24] 
	I0831 23:03:01.222800       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 23:03:01.222861       1 main.go:299] handling current node
	I0831 23:03:01.222878       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0831 23:03:01.222895       1 main.go:322] Node ha-330867-m02 has CIDR [10.244.1.0/24] 
	I0831 23:03:01.223006       1 main.go:295] Handling node with IPs: map[192.168.49.4:{}]
	I0831 23:03:01.223018       1 main.go:322] Node ha-330867-m03 has CIDR [10.244.2.0/24] 
	I0831 23:03:01.223056       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0831 23:03:01.223061       1 main.go:322] Node ha-330867-m04 has CIDR [10.244.3.0/24] 
	I0831 23:03:11.219665       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 23:03:11.219705       1 main.go:299] handling current node
	I0831 23:03:11.219725       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0831 23:03:11.219732       1 main.go:322] Node ha-330867-m02 has CIDR [10.244.1.0/24] 
	I0831 23:03:11.219832       1 main.go:295] Handling node with IPs: map[192.168.49.4:{}]
	I0831 23:03:11.219848       1 main.go:322] Node ha-330867-m03 has CIDR [10.244.2.0/24] 
	I0831 23:03:11.219887       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0831 23:03:11.219893       1 main.go:322] Node ha-330867-m04 has CIDR [10.244.3.0/24] 
	I0831 23:03:21.219842       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0831 23:03:21.219978       1 main.go:322] Node ha-330867-m04 has CIDR [10.244.3.0/24] 
	I0831 23:03:21.220151       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 23:03:21.220218       1 main.go:299] handling current node
	I0831 23:03:21.220256       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0831 23:03:21.220305       1 main.go:322] Node ha-330867-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [141e1929720c5a98d8f677374779f1af348adc9610aac7b5d4ff4c2fb98f9a4b] <==
	W0831 23:00:59.400646       1 reflector.go:561] storage/cacher.go:/apiextensions.k8s.io/customresourcedefinitions: failed to list *apiextensions.CustomResourceDefinition: etcdserver: leader changed
	E0831 23:00:59.400680       1 cacher.go:478] cacher (customresourcedefinitions.apiextensions.k8s.io): unexpected ListAndWatch error: failed to list *apiextensions.CustomResourceDefinition: etcdserver: leader changed; reinitializing...
	I0831 23:00:59.590193       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0831 23:00:59.590834       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0831 23:00:59.590854       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0831 23:00:59.591049       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0831 23:00:59.591069       1 policy_source.go:224] refreshing policies
	I0831 23:00:59.592827       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0831 23:00:59.592886       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0831 23:00:59.599846       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0831 23:00:59.602347       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0831 23:00:59.604025       1 shared_informer.go:320] Caches are synced for configmaps
	I0831 23:00:59.605257       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0831 23:00:59.626348       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0831 23:00:59.629979       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0831 23:00:59.630082       1 aggregator.go:171] initial CRD sync complete...
	I0831 23:00:59.630097       1 autoregister_controller.go:144] Starting autoregister controller
	I0831 23:00:59.630103       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0831 23:00:59.630114       1 cache.go:39] Caches are synced for autoregister controller
	I0831 23:00:59.661434       1 shared_informer.go:320] Caches are synced for node_authorizer
	W0831 23:00:59.676633       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I0831 23:00:59.678760       1 controller.go:615] quota admission added evaluator for: endpoints
	I0831 23:00:59.704402       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0831 23:00:59.718819       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	F0831 23:01:41.387445       1 hooks.go:210] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	
	
	==> kube-apiserver [ac8e4f8f13cee824acd8865f09198df1d3497939c78f23271207dd73e6c8eb71] <==
	I0831 23:01:45.873257       1 cluster_authentication_trust_controller.go:443] Starting cluster_authentication_trust_controller controller
	I0831 23:01:45.873367       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0831 23:01:45.898692       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0831 23:01:45.898905       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0831 23:01:46.122697       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0831 23:01:46.125742       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0831 23:01:46.125837       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0831 23:01:46.127517       1 aggregator.go:171] initial CRD sync complete...
	I0831 23:01:46.127730       1 autoregister_controller.go:144] Starting autoregister controller
	I0831 23:01:46.127770       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0831 23:01:46.127801       1 cache.go:39] Caches are synced for autoregister controller
	I0831 23:01:46.132281       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0831 23:01:46.159596       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0831 23:01:46.161245       1 policy_source.go:224] refreshing policies
	I0831 23:01:46.175721       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0831 23:01:46.177134       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0831 23:01:46.198719       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0831 23:01:46.219749       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0831 23:01:46.227571       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0831 23:01:46.227924       1 shared_informer.go:320] Caches are synced for configmaps
	I0831 23:01:46.229925       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0831 23:01:46.733463       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0831 23:01:47.464744       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I0831 23:01:47.469403       1 controller.go:615] quota admission added evaluator for: endpoints
	I0831 23:01:47.483502       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [433e9e1debd5075b55db4657a10f00dbbb797b121943b1488f363648af01af15] <==
	I0831 23:01:23.334365       1 serving.go:386] Generated self-signed cert in-memory
	I0831 23:01:24.693558       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0831 23:01:24.693591       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 23:01:24.695047       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0831 23:01:24.695199       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0831 23:01:24.695326       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0831 23:01:24.695414       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0831 23:01:34.711950       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [63dc2314933255982d0626169f95ba999256264eeb9320b307ee7836ea807473] <==
	I0831 23:02:37.946155       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="83.441003ms"
	I0831 23:02:37.946261       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="61.013µs"
	I0831 23:02:51.601104       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-330867-m04"
	I0831 23:02:51.606488       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-330867-m04"
	I0831 23:02:51.627215       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-330867-m04"
	I0831 23:02:54.491657       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-330867-m04"
	I0831 23:03:09.725459       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-330867-m03"
	I0831 23:03:09.750489       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-330867-m03"
	I0831 23:03:09.937373       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="128.261914ms"
	I0831 23:03:10.175543       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="236.708649ms"
	E0831 23:03:10.175705       1 replica_set.go:560] "Unhandled Error" err="sync \"default/busybox-7dff88458\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-7dff88458\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0831 23:03:10.178307       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="93.218µs"
	I0831 23:03:10.184401       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="100.537µs"
	I0831 23:03:11.983169       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="46.474µs"
	I0831 23:03:12.515911       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51.749µs"
	I0831 23:03:12.525273       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.341µs"
	I0831 23:03:13.867959       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.94723ms"
	I0831 23:03:13.868348       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="66.092µs"
	I0831 23:03:16.558029       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-330867-m04"
	I0831 23:03:16.558813       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-330867-m03"
	E0831 23:03:19.148610       1 gc_controller.go:151] "Failed to get node" err="node \"ha-330867-m03\" not found" logger="pod-garbage-collector-controller" node="ha-330867-m03"
	E0831 23:03:19.148663       1 gc_controller.go:151] "Failed to get node" err="node \"ha-330867-m03\" not found" logger="pod-garbage-collector-controller" node="ha-330867-m03"
	E0831 23:03:19.148671       1 gc_controller.go:151] "Failed to get node" err="node \"ha-330867-m03\" not found" logger="pod-garbage-collector-controller" node="ha-330867-m03"
	E0831 23:03:19.148681       1 gc_controller.go:151] "Failed to get node" err="node \"ha-330867-m03\" not found" logger="pod-garbage-collector-controller" node="ha-330867-m03"
	E0831 23:03:19.148688       1 gc_controller.go:151] "Failed to get node" err="node \"ha-330867-m03\" not found" logger="pod-garbage-collector-controller" node="ha-330867-m03"
	
	
	==> kube-proxy [1c3513e74ca3f5be6210743e119365b7caa1e30f6d00690463def5c083ddae6b] <==
	I0831 23:01:21.027212       1 server_linux.go:66] "Using iptables proxy"
	I0831 23:01:21.428570       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0831 23:01:21.428732       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0831 23:01:21.458912       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0831 23:01:21.459033       1 server_linux.go:169] "Using iptables Proxier"
	I0831 23:01:21.461180       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0831 23:01:21.461736       1 server.go:483] "Version info" version="v1.31.0"
	I0831 23:01:21.461797       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 23:01:21.469586       1 config.go:197] "Starting service config controller"
	I0831 23:01:21.470923       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0831 23:01:21.471037       1 config.go:104] "Starting endpoint slice config controller"
	I0831 23:01:21.471086       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0831 23:01:21.473839       1 config.go:326] "Starting node config controller"
	I0831 23:01:21.483114       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0831 23:01:21.483209       1 shared_informer.go:320] Caches are synced for node config
	I0831 23:01:21.571745       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0831 23:01:21.571880       1 shared_informer.go:320] Caches are synced for service config
	W0831 23:02:31.165230       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1911": http2: client connection lost
	E0831 23:02:31.165507       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1911\": http2: client connection lost" logger="UnhandledError"
	W0831 23:02:31.165599       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-330867&resourceVersion=1907": http2: client connection lost
	E0831 23:02:31.165639       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-330867&resourceVersion=1907\": http2: client connection lost" logger="UnhandledError"
	W0831 23:02:31.165720       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1871": http2: client connection lost
	E0831 23:02:31.165753       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1871\": http2: client connection lost" logger="UnhandledError"
	
	
	==> kube-scheduler [25d333081f684b9295fe4bb42052417494e5591d914f90e62f3e9172e343f9ed] <==
	E0831 23:00:56.879214       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 23:00:57.209432       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0831 23:00:57.209544       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 23:00:57.453348       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0831 23:00:57.453390       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 23:00:59.101910       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0831 23:00:59.102036       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0831 23:01:00.018211       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0831 23:01:46.118372       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims) - error from a previous attempt: read tcp 192.168.49.2:44866->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0831 23:01:46.118373       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods) - error from a previous attempt: read tcp 192.168.49.2:44880->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0831 23:01:46.118515       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:44922->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0831 23:01:46.118574       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps) - error from a previous attempt: read tcp 192.168.49.2:44908->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0831 23:01:46.118576       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes) - error from a previous attempt: read tcp 192.168.49.2:44968->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0831 23:01:46.118631       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy) - error from a previous attempt: read tcp 192.168.49.2:44984->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0831 23:01:46.118684       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps) - error from a previous attempt: read tcp 192.168.49.2:44906->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0831 23:01:46.118747       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:44956->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0831 23:01:46.118796       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes) - error from a previous attempt: read tcp 192.168.49.2:44898->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0831 23:01:46.118813       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:44948->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0831 23:01:46.118875       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces) - error from a previous attempt: read tcp 192.168.49.2:44894->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0831 23:01:46.118929       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services) - error from a previous attempt: read tcp 192.168.49.2:44936->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0831 23:01:46.118984       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:44878->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0831 23:01:46.118987       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers) - error from a previous attempt: read tcp 192.168.49.2:44926->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0831 23:03:09.901823       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-2r2dv\": pod busybox-7dff88458-2r2dv is already assigned to node \"ha-330867-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-2r2dv" node="ha-330867-m04"
	E0831 23:03:09.901992       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-2r2dv\": pod busybox-7dff88458-2r2dv is already assigned to node \"ha-330867-m04\"" pod="default/busybox-7dff88458-2r2dv"
	E0831 23:03:09.991168       1 schedule_one.go:1078] "Error occurred" err="Pod default/busybox-7dff88458-cfcnz is already present in the active queue" pod="default/busybox-7dff88458-cfcnz"
	
	
	==> kubelet <==
	Aug 31 23:02:31 ha-330867 kubelet[762]: E0831 23:02:31.125592     762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-330867&resourceVersion=1839\": http2: client connection lost" logger="UnhandledError"
	Aug 31 23:02:31 ha-330867 kubelet[762]: W0831 23:02:31.125647     762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1838": http2: client connection lost
	Aug 31 23:02:31 ha-330867 kubelet[762]: E0831 23:02:31.125676     762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1838\": http2: client connection lost" logger="UnhandledError"
	Aug 31 23:02:31 ha-330867 kubelet[762]: W0831 23:02:31.125722     762 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1838": http2: client connection lost
	Aug 31 23:02:31 ha-330867 kubelet[762]: E0831 23:02:31.125750     762 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1838\": http2: client connection lost" logger="UnhandledError"
	Aug 31 23:02:31 ha-330867 kubelet[762]: W0831 23:02:31.125807     762 reflector.go:561] pkg/kubelet/config/apiserver.go:66: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dha-330867&resourceVersion=1966": http2: client connection lost
	Aug 31 23:02:31 ha-330867 kubelet[762]: E0831 23:02:31.125835     762 reflector.go:158] "Unhandled Error" err="pkg/kubelet/config/apiserver.go:66: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dha-330867&resourceVersion=1966\": http2: client connection lost" logger="UnhandledError"
	Aug 31 23:02:31 ha-330867 kubelet[762]: W0831 23:02:31.125878     762 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1838": http2: client connection lost
	Aug 31 23:02:31 ha-330867 kubelet[762]: E0831 23:02:31.125913     762 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1838\": http2: client connection lost" logger="UnhandledError"
	Aug 31 23:02:31 ha-330867 kubelet[762]: W0831 23:02:31.125966     762 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=1838": http2: client connection lost
	Aug 31 23:02:31 ha-330867 kubelet[762]: E0831 23:02:31.125993     762 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=1838\": http2: client connection lost" logger="UnhandledError"
	Aug 31 23:02:31 ha-330867 kubelet[762]: E0831 23:02:31.126044     762 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-330867.17f0f27350a5ff8f\": http2: client connection lost" event="&Event{ObjectMeta:{kube-apiserver-ha-330867.17f0f27350a5ff8f  kube-system   1769 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-330867,UID:cf3ee01affeaae0e79f11b427bc3732c,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"registry.k8s.io/kube-apiserver:v1.31.0\" already present on machine,Source:EventSource{Component:kubelet,Host:ha-330867,},FirstTimestamp:2024-08-31 23:00:34 +0000 UTC,LastTimestamp:2024-08-31 23:01:42.090663478 +0000 UTC m=+74.441514071,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-330867,}"
	Aug 31 23:02:31 ha-330867 kubelet[762]: W0831 23:02:31.126188     762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1825": http2: client connection lost
	Aug 31 23:02:31 ha-330867 kubelet[762]: E0831 23:02:31.126420     762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1825\": http2: client connection lost" logger="UnhandledError"
	Aug 31 23:02:31 ha-330867 kubelet[762]: I0831 23:02:31.126530     762 status_manager.go:875] "Failed to update status for pod" pod="kube-system/kube-apiserver-ha-330867" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdbb0015-8158-49f3-a4fb-02a878e653da\\\"},\\\"status\\\":{\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e4f8f13cee824acd8865f09198df1d3497939c78f23271207dd73e6c8eb71\\\",\\\"image\\\":\\\"registry.k8s.io/kube-apiserver:v1.31.0\\\",\\\"imageID\\\":\\\"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://141e1929720c5a98d8f677374779f1af348adc9610aac7b5d4ff4c2fb98f9a4b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2024-08-31T23:01:41Z\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2024-08-31T23:00:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2024-08-31T23:01:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ssl/certs\\\",\\\"name\\\":\\\"ca-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/ca-certificates\\\",\\\"name\\\":\\\"etc-ca-certificates\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/minikube/certs\\\",\\\"name\\\":\\\"k8s-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/usr/local/share/ca-certificates\\\",\\\"name\\\":\\\"usr-local-share-ca-certificates\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/usr/share/ca-certificates\\\",\\\"name\\\":\\\"usr-share-ca-certificates\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"kube-system\"/\"kube-apiserver-ha-330867\": Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-330867/status\": http2: client connection lost"
	Aug 31 23:02:37 ha-330867 kubelet[762]: E0831 23:02:37.903221     762 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145357903023629,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:02:37 ha-330867 kubelet[762]: E0831 23:02:37.903257     762 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145357903023629,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:02:47 ha-330867 kubelet[762]: E0831 23:02:47.906134     762 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145367905839867,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:02:47 ha-330867 kubelet[762]: E0831 23:02:47.906170     762 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145367905839867,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:02:57 ha-330867 kubelet[762]: E0831 23:02:57.908087     762 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145377907855321,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:02:57 ha-330867 kubelet[762]: E0831 23:02:57.908122     762 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145377907855321,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:03:07 ha-330867 kubelet[762]: E0831 23:03:07.910156     762 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145387909850514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:03:07 ha-330867 kubelet[762]: E0831 23:03:07.910211     762 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145387909850514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:03:17 ha-330867 kubelet[762]: E0831 23:03:17.911580     762 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145397911318896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:03:17 ha-330867 kubelet[762]: E0831 23:03:17.911616     762 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145397911318896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-330867 -n ha-330867
helpers_test.go:262: (dbg) Run:  kubectl --context ha-330867 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:286: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:287: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (16.68s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (125.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-330867 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0831 23:04:22.940737  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/functional-499633/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:06:01.241454  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-330867 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m0.264734669s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:589: expected 3 nodes to be Ready, got 
-- stdout --
	NAME            STATUS     ROLES           AGE     VERSION
	ha-330867       NotReady   control-plane   10m     v1.31.0
	ha-330867-m02   Ready      control-plane   10m     v1.31.0
	ha-330867-m04   Ready      <none>          7m47s   v1.31.0

                                                
                                                
-- /stdout --
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:597: expected 3 nodes Ready status to be True, got 
-- stdout --
	' Unknown
	 True
	 True
	'

                                                
                                                
-- /stdout --
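For reference, the readiness check above can be reproduced against this profile with plain kubectl (a sketch only; the jsonpath expression is an assumed equivalent of the go-template used at ha_test.go:592, and the `describe node` call is a suggested follow-up rather than part of the test):

	# List each node with the status of its Ready condition (assumed-equivalent jsonpath query)
	kubectl --context ha-330867 get nodes \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
	# Inspect the node reported NotReady/Unknown for its condition messages
	kubectl --context ha-330867 describe node ha-330867
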
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect ha-330867
helpers_test.go:236: (dbg) docker inspect ha-330867:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "db44dca62049d0d9134d666ca8fbf76da21abb37d9c21a41bddc9bfe1aa4f192",
	        "Created": "2024-08-31T22:54:59.324706066Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 344369,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-31T23:04:02.588588052Z",
	            "FinishedAt": "2024-08-31T23:04:01.609632257Z"
	        },
	        "Image": "sha256:eb620c1d7126103417d4dc31eb6aaaf95b0878713d0303a36cb77002c31b0deb",
	        "ResolvConfPath": "/var/lib/docker/containers/db44dca62049d0d9134d666ca8fbf76da21abb37d9c21a41bddc9bfe1aa4f192/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/db44dca62049d0d9134d666ca8fbf76da21abb37d9c21a41bddc9bfe1aa4f192/hostname",
	        "HostsPath": "/var/lib/docker/containers/db44dca62049d0d9134d666ca8fbf76da21abb37d9c21a41bddc9bfe1aa4f192/hosts",
	        "LogPath": "/var/lib/docker/containers/db44dca62049d0d9134d666ca8fbf76da21abb37d9c21a41bddc9bfe1aa4f192/db44dca62049d0d9134d666ca8fbf76da21abb37d9c21a41bddc9bfe1aa4f192-json.log",
	        "Name": "/ha-330867",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-330867:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-330867",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/50dd99986f3e611e1df49debfb3d9f49455382bd3e8a28c4563876fdc050928b-init/diff:/var/lib/docker/overlay2/b65bd3df822a42b081e949f262147909a06a528615f1ebee5ca341285d3e7159/diff",
	                "MergedDir": "/var/lib/docker/overlay2/50dd99986f3e611e1df49debfb3d9f49455382bd3e8a28c4563876fdc050928b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/50dd99986f3e611e1df49debfb3d9f49455382bd3e8a28c4563876fdc050928b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/50dd99986f3e611e1df49debfb3d9f49455382bd3e8a28c4563876fdc050928b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-330867",
	                "Source": "/var/lib/docker/volumes/ha-330867/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-330867",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-330867",
	                "name.minikube.sigs.k8s.io": "ha-330867",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ffddd5c4d32869e621c1e4c792eb43c9748ccbb5f2a87fc66aae11fd22e2a52a",
	            "SandboxKey": "/var/run/docker/netns/ffddd5c4d328",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33193"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33194"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33197"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33195"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33196"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-330867": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "84f98643c6f36600558d56174c7006f409ffd5e61fb741f838ba34e8937fb59a",
	                    "EndpointID": "49cf01d58b838324250a4f171e0509d86bfd2dacbb0a2b395a3087c541f8500a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-330867",
	                        "db44dca62049"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-330867 -n ha-330867
helpers_test.go:245: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p ha-330867 logs -n 25: (2.07802314s)
helpers_test.go:253: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-330867 cp ha-330867-m03:/home/docker/cp-test.txt                              | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | ha-330867-m04:/home/docker/cp-test_ha-330867-m03_ha-330867-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-330867 ssh -n                                                                 | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | ha-330867-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-330867 ssh -n ha-330867-m04 sudo cat                                          | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | /home/docker/cp-test_ha-330867-m03_ha-330867-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-330867 cp testdata/cp-test.txt                                                | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | ha-330867-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-330867 ssh -n                                                                 | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | ha-330867-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-330867 cp ha-330867-m04:/home/docker/cp-test.txt                              | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4235394847/001/cp-test_ha-330867-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-330867 ssh -n                                                                 | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | ha-330867-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-330867 cp ha-330867-m04:/home/docker/cp-test.txt                              | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | ha-330867:/home/docker/cp-test_ha-330867-m04_ha-330867.txt                       |           |         |         |                     |                     |
	| ssh     | ha-330867 ssh -n                                                                 | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | ha-330867-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-330867 ssh -n ha-330867 sudo cat                                              | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | /home/docker/cp-test_ha-330867-m04_ha-330867.txt                                 |           |         |         |                     |                     |
	| cp      | ha-330867 cp ha-330867-m04:/home/docker/cp-test.txt                              | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | ha-330867-m02:/home/docker/cp-test_ha-330867-m04_ha-330867-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-330867 ssh -n                                                                 | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | ha-330867-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-330867 ssh -n ha-330867-m02 sudo cat                                          | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | /home/docker/cp-test_ha-330867-m04_ha-330867-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-330867 cp ha-330867-m04:/home/docker/cp-test.txt                              | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | ha-330867-m03:/home/docker/cp-test_ha-330867-m04_ha-330867-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-330867 ssh -n                                                                 | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | ha-330867-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-330867 ssh -n ha-330867-m03 sudo cat                                          | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:58 UTC |
	|         | /home/docker/cp-test_ha-330867-m04_ha-330867-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-330867 node stop m02 -v=7                                                     | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:58 UTC | 31 Aug 24 22:59 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-330867 node start m02 -v=7                                                    | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:59 UTC | 31 Aug 24 22:59 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-330867 -v=7                                                           | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-330867 -v=7                                                                | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 22:59 UTC | 31 Aug 24 23:00 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-330867 --wait=true -v=7                                                    | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 23:00 UTC | 31 Aug 24 23:03 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-330867                                                                | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC |                     |
	| node    | ha-330867 node delete m03 -v=7                                                   | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:03 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-330867 stop -v=7                                                              | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 23:03 UTC | 31 Aug 24 23:04 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-330867 --wait=true                                                         | ha-330867 | jenkins | v1.33.1 | 31 Aug 24 23:04 UTC | 31 Aug 24 23:06 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=docker                                                                  |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 23:04:02
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 23:04:02.047826  344166 out.go:345] Setting OutFile to fd 1 ...
	I0831 23:04:02.048026  344166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:04:02.048038  344166 out.go:358] Setting ErrFile to fd 2...
	I0831 23:04:02.048044  344166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:04:02.048314  344166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-277799/.minikube/bin
	I0831 23:04:02.048768  344166 out.go:352] Setting JSON to false
	I0831 23:04:02.049804  344166 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9990,"bootTime":1725135452,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0831 23:04:02.049888  344166 start.go:139] virtualization:  
	I0831 23:04:02.053224  344166 out.go:177] * [ha-330867] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0831 23:04:02.056637  344166 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 23:04:02.056740  344166 notify.go:220] Checking for updates...
	I0831 23:04:02.062336  344166 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 23:04:02.065262  344166 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-277799/kubeconfig
	I0831 23:04:02.068165  344166 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-277799/.minikube
	I0831 23:04:02.071036  344166 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0831 23:04:02.073992  344166 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 23:04:02.077363  344166 config.go:182] Loaded profile config "ha-330867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:04:02.078069  344166 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 23:04:02.112492  344166 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0831 23:04:02.112617  344166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 23:04:02.165962  344166 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:41 SystemTime:2024-08-31 23:04:02.15619979 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 23:04:02.166085  344166 docker.go:307] overlay module found
	I0831 23:04:02.169214  344166 out.go:177] * Using the docker driver based on existing profile
	I0831 23:04:02.171913  344166 start.go:297] selected driver: docker
	I0831 23:04:02.171940  344166 start.go:901] validating driver "docker" against &{Name:ha-330867 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-330867 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kub
evirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: S
ocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 23:04:02.172079  344166 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 23:04:02.172225  344166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 23:04:02.232536  344166 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:41 SystemTime:2024-08-31 23:04:02.223067308 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 23:04:02.232991  344166 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 23:04:02.233038  344166 cni.go:84] Creating CNI manager for ""
	I0831 23:04:02.233054  344166 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0831 23:04:02.233109  344166 start.go:340] cluster config:
	{Name:ha-330867 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-330867 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device
-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval
:1m0s}
	I0831 23:04:02.236437  344166 out.go:177] * Starting "ha-330867" primary control-plane node in "ha-330867" cluster
	I0831 23:04:02.239585  344166 cache.go:121] Beginning downloading kic base image for docker with crio
	I0831 23:04:02.242773  344166 out.go:177] * Pulling base image v0.0.44-1724862063-19530 ...
	I0831 23:04:02.246336  344166 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 23:04:02.246399  344166 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18943-277799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	I0831 23:04:02.246413  344166 cache.go:56] Caching tarball of preloaded images
	I0831 23:04:02.246447  344166 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local docker daemon
	I0831 23:04:02.246496  344166 preload.go:172] Found /home/jenkins/minikube-integration/18943-277799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0831 23:04:02.246507  344166 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0831 23:04:02.246663  344166 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/config.json ...
	W0831 23:04:02.266710  344166 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 is of wrong architecture
	I0831 23:04:02.266734  344166 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 to local cache
	I0831 23:04:02.266812  344166 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory
	I0831 23:04:02.266835  344166 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory, skipping pull
	I0831 23:04:02.266839  344166 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 exists in cache, skipping pull
	I0831 23:04:02.266847  344166 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 as a tarball
	I0831 23:04:02.266853  344166 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from local cache
	I0831 23:04:02.268227  344166 image.go:273] response: 
	I0831 23:04:02.441953  344166 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from cached tarball
	I0831 23:04:02.441997  344166 cache.go:194] Successfully downloaded all kic artifacts
	I0831 23:04:02.442043  344166 start.go:360] acquireMachinesLock for ha-330867: {Name:mk05480d63e8159586921c755402190e3148136c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 23:04:02.442124  344166 start.go:364] duration metric: took 52.603µs to acquireMachinesLock for "ha-330867"
	I0831 23:04:02.442173  344166 start.go:96] Skipping create...Using existing machine configuration
	I0831 23:04:02.442184  344166 fix.go:54] fixHost starting: 
	I0831 23:04:02.442474  344166 cli_runner.go:164] Run: docker container inspect ha-330867 --format={{.State.Status}}
	I0831 23:04:02.458873  344166 fix.go:112] recreateIfNeeded on ha-330867: state=Stopped err=<nil>
	W0831 23:04:02.458901  344166 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 23:04:02.462333  344166 out.go:177] * Restarting existing docker container for "ha-330867" ...
	I0831 23:04:02.465890  344166 cli_runner.go:164] Run: docker start ha-330867
	I0831 23:04:02.775700  344166 cli_runner.go:164] Run: docker container inspect ha-330867 --format={{.State.Status}}
	I0831 23:04:02.807230  344166 kic.go:435] container "ha-330867" state is running.
	I0831 23:04:02.807652  344166 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-330867")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-330867
	I0831 23:04:02.832285  344166 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/config.json ...
	I0831 23:04:02.832762  344166 machine.go:93] provisionDockerMachine start ...
	I0831 23:04:02.832862  344166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867
	I0831 23:04:02.852561  344166 main.go:141] libmachine: Using SSH client type: native
	I0831 23:04:02.852832  344166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I0831 23:04:02.852842  344166 main.go:141] libmachine: About to run SSH command:
	hostname
	I0831 23:04:02.853484  344166 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57418->127.0.0.1:33193: read: connection reset by peer
	I0831 23:04:05.987723  344166 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-330867
	
	I0831 23:04:05.987748  344166 ubuntu.go:169] provisioning hostname "ha-330867"
	I0831 23:04:05.987822  344166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867
	I0831 23:04:06.004754  344166 main.go:141] libmachine: Using SSH client type: native
	I0831 23:04:06.005027  344166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I0831 23:04:06.005045  344166 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-330867 && echo "ha-330867" | sudo tee /etc/hostname
	I0831 23:04:06.165931  344166 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-330867
	
	I0831 23:04:06.166036  344166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867
	I0831 23:04:06.185005  344166 main.go:141] libmachine: Using SSH client type: native
	I0831 23:04:06.185269  344166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I0831 23:04:06.185291  344166 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-330867' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-330867/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-330867' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 23:04:06.316275  344166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 23:04:06.316302  344166 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-277799/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-277799/.minikube}
	I0831 23:04:06.316320  344166 ubuntu.go:177] setting up certificates
	I0831 23:04:06.316343  344166 provision.go:84] configureAuth start
	I0831 23:04:06.316429  344166 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-330867")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-330867
	I0831 23:04:06.333321  344166 provision.go:143] copyHostCerts
	I0831 23:04:06.333368  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-277799/.minikube/ca.pem
	I0831 23:04:06.333402  344166 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-277799/.minikube/ca.pem, removing ...
	I0831 23:04:06.333413  344166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.pem
	I0831 23:04:06.333495  344166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-277799/.minikube/ca.pem (1082 bytes)
	I0831 23:04:06.333613  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-277799/.minikube/cert.pem
	I0831 23:04:06.333636  344166 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-277799/.minikube/cert.pem, removing ...
	I0831 23:04:06.333641  344166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-277799/.minikube/cert.pem
	I0831 23:04:06.333673  344166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-277799/.minikube/cert.pem (1123 bytes)
	I0831 23:04:06.333727  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-277799/.minikube/key.pem
	I0831 23:04:06.333748  344166 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-277799/.minikube/key.pem, removing ...
	I0831 23:04:06.333756  344166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-277799/.minikube/key.pem
	I0831 23:04:06.333783  344166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-277799/.minikube/key.pem (1675 bytes)
	I0831 23:04:06.333842  344166 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca-key.pem org=jenkins.ha-330867 san=[127.0.0.1 192.168.49.2 ha-330867 localhost minikube]
	I0831 23:04:06.700248  344166 provision.go:177] copyRemoteCerts
	I0831 23:04:06.700324  344166 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 23:04:06.700366  344166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867
	I0831 23:04:06.716616  344166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867/id_rsa Username:docker}
	I0831 23:04:06.813220  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0831 23:04:06.813282  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0831 23:04:06.838923  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0831 23:04:06.838999  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0831 23:04:06.864125  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0831 23:04:06.864186  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0831 23:04:06.889261  344166 provision.go:87] duration metric: took 572.900373ms to configureAuth
	I0831 23:04:06.889289  344166 ubuntu.go:193] setting minikube options for container-runtime
	I0831 23:04:06.889535  344166 config.go:182] Loaded profile config "ha-330867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:04:06.889648  344166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867
	I0831 23:04:06.905984  344166 main.go:141] libmachine: Using SSH client type: native
	I0831 23:04:06.906241  344166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I0831 23:04:06.906261  344166 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0831 23:04:07.373688  344166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0831 23:04:07.373722  344166 machine.go:96] duration metric: took 4.540938428s to provisionDockerMachine
	I0831 23:04:07.373735  344166 start.go:293] postStartSetup for "ha-330867" (driver="docker")
	I0831 23:04:07.373746  344166 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 23:04:07.373809  344166 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 23:04:07.373855  344166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867
	I0831 23:04:07.394358  344166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867/id_rsa Username:docker}
	I0831 23:04:07.493840  344166 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 23:04:07.497064  344166 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0831 23:04:07.497101  344166 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0831 23:04:07.497112  344166 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0831 23:04:07.497118  344166 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0831 23:04:07.497128  344166 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-277799/.minikube/addons for local assets ...
	I0831 23:04:07.497190  344166 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-277799/.minikube/files for local assets ...
	I0831 23:04:07.497271  344166 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem -> 2831972.pem in /etc/ssl/certs
	I0831 23:04:07.497281  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem -> /etc/ssl/certs/2831972.pem
	I0831 23:04:07.497386  344166 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0831 23:04:07.506230  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem --> /etc/ssl/certs/2831972.pem (1708 bytes)
	I0831 23:04:07.532134  344166 start.go:296] duration metric: took 158.383627ms for postStartSetup
	I0831 23:04:07.532260  344166 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 23:04:07.532349  344166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867
	I0831 23:04:07.548842  344166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867/id_rsa Username:docker}
	I0831 23:04:07.641226  344166 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0831 23:04:07.645789  344166 fix.go:56] duration metric: took 5.203597026s for fixHost
	I0831 23:04:07.645817  344166 start.go:83] releasing machines lock for "ha-330867", held for 5.203677698s
	I0831 23:04:07.645898  344166 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-330867")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-330867
	I0831 23:04:07.663287  344166 ssh_runner.go:195] Run: cat /version.json
	I0831 23:04:07.663362  344166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867
	I0831 23:04:07.663634  344166 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 23:04:07.663700  344166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867
	I0831 23:04:07.683601  344166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867/id_rsa Username:docker}
	I0831 23:04:07.696642  344166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867/id_rsa Username:docker}
	I0831 23:04:07.916142  344166 ssh_runner.go:195] Run: systemctl --version
	I0831 23:04:07.920393  344166 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0831 23:04:08.062040  344166 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0831 23:04:08.066819  344166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 23:04:08.077256  344166 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0831 23:04:08.077375  344166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 23:04:08.087078  344166 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0831 23:04:08.087108  344166 start.go:495] detecting cgroup driver to use...
	I0831 23:04:08.087990  344166 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0831 23:04:08.088057  344166 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0831 23:04:08.101235  344166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 23:04:08.113252  344166 docker.go:217] disabling cri-docker service (if available) ...
	I0831 23:04:08.113359  344166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0831 23:04:08.126575  344166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0831 23:04:08.142490  344166 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0831 23:04:08.223581  344166 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0831 23:04:08.312437  344166 docker.go:233] disabling docker service ...
	I0831 23:04:08.312579  344166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0831 23:04:08.327185  344166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0831 23:04:08.338817  344166 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0831 23:04:08.426220  344166 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0831 23:04:08.506661  344166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0831 23:04:08.518430  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 23:04:08.536654  344166 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0831 23:04:08.536785  344166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:04:08.547538  344166 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0831 23:04:08.547662  344166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:04:08.558765  344166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:04:08.569333  344166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:04:08.580009  344166 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 23:04:08.589748  344166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:04:08.599535  344166 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:04:08.609205  344166 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:04:08.619231  344166 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 23:04:08.627899  344166 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 23:04:08.636601  344166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 23:04:08.723772  344166 ssh_runner.go:195] Run: sudo systemctl restart crio
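The sed edits above patch the existing CRI-O config in place (pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl in /etc/crio/crio.conf.d/02-crio.conf, plus the crictl endpoint in /etc/crictl.yaml) before the restart. For anyone re-running this scenario, a minimal spot-check of what landed on the node, assuming the ha-330867 profile is still up (expected values below are reconstructed from the commands in this log, not captured from the node):

	minikube -p ha-330867 ssh -- sudo cat /etc/crictl.yaml
	# runtime-endpoint: unix:///var/run/crio/crio.sock
	minikube -p ha-330867 ssh -- sudo cat /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	# default_sysctls = [
	#   "net.ipv4.ip_unprivileged_port_start=0",
	# ]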
	I0831 23:04:08.860539  344166 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0831 23:04:08.860610  344166 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0831 23:04:08.864104  344166 start.go:563] Will wait 60s for crictl version
	I0831 23:04:08.864177  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:04:08.867680  344166 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 23:04:08.903628  344166 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0831 23:04:08.903736  344166 ssh_runner.go:195] Run: crio --version
	I0831 23:04:08.945627  344166 ssh_runner.go:195] Run: crio --version
	I0831 23:04:08.989767  344166 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0831 23:04:08.991559  344166 cli_runner.go:164] Run: docker network inspect ha-330867 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0831 23:04:09.009419  344166 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0831 23:04:09.015507  344166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 23:04:09.028795  344166 kubeadm.go:883] updating cluster {Name:ha-330867 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-330867 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false l
ogviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPat
h: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0831 23:04:09.029014  344166 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 23:04:09.029106  344166 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 23:04:09.090258  344166 crio.go:514] all images are preloaded for cri-o runtime.
	I0831 23:04:09.090284  344166 crio.go:433] Images already preloaded, skipping extraction
	I0831 23:04:09.090343  344166 ssh_runner.go:195] Run: sudo crictl images --output json
	I0831 23:04:09.129734  344166 crio.go:514] all images are preloaded for cri-o runtime.
	I0831 23:04:09.129759  344166 cache_images.go:84] Images are preloaded, skipping loading
	I0831 23:04:09.129770  344166 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0831 23:04:09.129915  344166 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-330867 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-330867 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
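The kubelet flags shown above are not run directly; minikube renders them into a systemd drop-in, written a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A hedged way to inspect the rendered unit on the node, assuming the profile is still running:

	minikube -p ha-330867 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	minikube -p ha-330867 ssh -- sudo systemctl cat kubelet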
	I0831 23:04:09.130011  344166 ssh_runner.go:195] Run: crio config
	I0831 23:04:09.179001  344166 cni.go:84] Creating CNI manager for ""
	I0831 23:04:09.179026  344166 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0831 23:04:09.179037  344166 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0831 23:04:09.179084  344166 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-330867 NodeName:ha-330867 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0831 23:04:09.179273  344166 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-330867"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
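The rendered kubeadm config above is copied to /var/tmp/minikube/kubeadm.yaml.new shortly below and later diffed against the existing /var/tmp/minikube/kubeadm.yaml to decide whether the control plane needs reconfiguration. If reproducing this run, both files can be compared directly on the node (assuming the cluster is still up):

	minikube -p ha-330867 ssh -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new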
	
	I0831 23:04:09.179295  344166 kube-vip.go:115] generating kube-vip config ...
	I0831 23:04:09.179356  344166 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0831 23:04:09.192061  344166 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0831 23:04:09.192178  344166 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
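The manifest above is the static pod that serves the HA virtual IP 192.168.49.254 on each control-plane node; it is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, where the kubelet picks it up. A rough spot-check that it was written and started, assuming the profile is still up:

	minikube -p ha-330867 ssh -- sudo ls -l /etc/kubernetes/manifests/kube-vip.yaml
	minikube -p ha-330867 ssh -- sudo crictl ps --name kube-vip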
	I0831 23:04:09.192243  344166 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0831 23:04:09.201769  344166 binaries.go:44] Found k8s binaries, skipping transfer
	I0831 23:04:09.201843  344166 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0831 23:04:09.210708  344166 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0831 23:04:09.228640  344166 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 23:04:09.246037  344166 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0831 23:04:09.265097  344166 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0831 23:04:09.283394  344166 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0831 23:04:09.286807  344166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 23:04:09.298235  344166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 23:04:09.388470  344166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 23:04:09.402678  344166 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867 for IP: 192.168.49.2
	I0831 23:04:09.402702  344166 certs.go:194] generating shared ca certs ...
	I0831 23:04:09.402719  344166 certs.go:226] acquiring lock for ca certs: {Name:mk25c48345241d49df22687ae20353d5a7b46e8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:04:09.402861  344166 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key
	I0831 23:04:09.402916  344166 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key
	I0831 23:04:09.402928  344166 certs.go:256] generating profile certs ...
	I0831 23:04:09.403007  344166 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/client.key
	I0831 23:04:09.403040  344166 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.key.ef27e06c
	I0831 23:04:09.403059  344166 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.crt.ef27e06c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0831 23:04:10.061007  344166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.crt.ef27e06c ...
	I0831 23:04:10.061048  344166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.crt.ef27e06c: {Name:mke465a4f5a401caee2ec5263177cb70eeded102 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:04:10.061303  344166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.key.ef27e06c ...
	I0831 23:04:10.061320  344166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.key.ef27e06c: {Name:mkedcc1e33e86880c91984561aef2d6208eaf348 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:04:10.061434  344166 certs.go:381] copying /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.crt.ef27e06c -> /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.crt
	I0831 23:04:10.061586  344166 certs.go:385] copying /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.key.ef27e06c -> /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.key
	I0831 23:04:10.061744  344166 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/proxy-client.key
	I0831 23:04:10.061766  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0831 23:04:10.061784  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0831 23:04:10.061805  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0831 23:04:10.061823  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0831 23:04:10.061841  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0831 23:04:10.061856  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0831 23:04:10.061871  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0831 23:04:10.061887  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0831 23:04:10.061944  344166 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/283197.pem (1338 bytes)
	W0831 23:04:10.061980  344166 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-277799/.minikube/certs/283197_empty.pem, impossibly tiny 0 bytes
	I0831 23:04:10.061993  344166 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca-key.pem (1679 bytes)
	I0831 23:04:10.062021  344166 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem (1082 bytes)
	I0831 23:04:10.062049  344166 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem (1123 bytes)
	I0831 23:04:10.062075  344166 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem (1675 bytes)
	I0831 23:04:10.062162  344166 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem (1708 bytes)
	I0831 23:04:10.062197  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:04:10.062210  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/283197.pem -> /usr/share/ca-certificates/283197.pem
	I0831 23:04:10.062223  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem -> /usr/share/ca-certificates/2831972.pem
	I0831 23:04:10.062830  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 23:04:10.096105  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 23:04:10.125442  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 23:04:10.152143  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0831 23:04:10.178496  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0831 23:04:10.204700  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0831 23:04:10.229861  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 23:04:10.255073  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0831 23:04:10.280008  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 23:04:10.305466  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/certs/283197.pem --> /usr/share/ca-certificates/283197.pem (1338 bytes)
	I0831 23:04:10.330306  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem --> /usr/share/ca-certificates/2831972.pem (1708 bytes)
	I0831 23:04:10.354671  344166 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0831 23:04:10.372769  344166 ssh_runner.go:195] Run: openssl version
	I0831 23:04:10.380544  344166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 23:04:10.390897  344166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:04:10.394405  344166 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:04:10.394477  344166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:04:10.401204  344166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0831 23:04:10.409926  344166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/283197.pem && ln -fs /usr/share/ca-certificates/283197.pem /etc/ssl/certs/283197.pem"
	I0831 23:04:10.419406  344166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/283197.pem
	I0831 23:04:10.422912  344166 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 31 22:51 /usr/share/ca-certificates/283197.pem
	I0831 23:04:10.422980  344166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/283197.pem
	I0831 23:04:10.429998  344166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/283197.pem /etc/ssl/certs/51391683.0"
	I0831 23:04:10.439135  344166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2831972.pem && ln -fs /usr/share/ca-certificates/2831972.pem /etc/ssl/certs/2831972.pem"
	I0831 23:04:10.448452  344166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2831972.pem
	I0831 23:04:10.452026  344166 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 31 22:51 /usr/share/ca-certificates/2831972.pem
	I0831 23:04:10.452107  344166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2831972.pem
	I0831 23:04:10.459452  344166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2831972.pem /etc/ssl/certs/3ec20f2e.0"
	I0831 23:04:10.468475  344166 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 23:04:10.472038  344166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0831 23:04:10.478776  344166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0831 23:04:10.486587  344166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0831 23:04:10.494854  344166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0831 23:04:10.501845  344166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0831 23:04:10.508790  344166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
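Each of the openssl x509 -checkend 86400 calls above exits 0 only if the certificate is still valid 24 hours from now, which is why no certificates are regenerated here. The same check can be repeated by hand for any of the certs, for example (assuming the profile is still up; apiserver.crt path taken from this log):

	minikube -p ha-330867 ssh
	# then, inside the node:
	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 && echo "valid >24h" || echo "expiring within 24h"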
	I0831 23:04:10.517134  344166 kubeadm.go:392] StartCluster: {Name:ha-330867 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-330867 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logv
iewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 23:04:10.517276  344166 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0831 23:04:10.517335  344166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0831 23:04:10.556649  344166 cri.go:89] found id: ""
	I0831 23:04:10.556774  344166 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0831 23:04:10.566283  344166 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0831 23:04:10.566305  344166 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0831 23:04:10.566379  344166 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0831 23:04:10.575243  344166 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0831 23:04:10.575682  344166 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-330867" does not appear in /home/jenkins/minikube-integration/18943-277799/kubeconfig
	I0831 23:04:10.575793  344166 kubeconfig.go:62] /home/jenkins/minikube-integration/18943-277799/kubeconfig needs updating (will repair): [kubeconfig missing "ha-330867" cluster setting kubeconfig missing "ha-330867" context setting]
	I0831 23:04:10.576040  344166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/kubeconfig: {Name:mk030275545fba839e6cc35acffc3f7a124ed10a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:04:10.576462  344166 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-277799/kubeconfig
	I0831 23:04:10.576711  344166 kapi.go:59] client config for ha-330867: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/client.key", CAFile:"/home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19cbad0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0831 23:04:10.577372  344166 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0831 23:04:10.577450  344166 cert_rotation.go:140] Starting client certificate rotation controller
	I0831 23:04:10.586389  344166 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.49.2
	I0831 23:04:10.586411  344166 kubeadm.go:597] duration metric: took 20.10001ms to restartPrimaryControlPlane
	I0831 23:04:10.586420  344166 kubeadm.go:394] duration metric: took 69.295806ms to StartCluster
	I0831 23:04:10.586436  344166 settings.go:142] acquiring lock: {Name:mkadbc7d53c5858a38d57ec152e52037ebee242b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:04:10.586498  344166 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-277799/kubeconfig
	I0831 23:04:10.587130  344166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-277799/kubeconfig: {Name:mk030275545fba839e6cc35acffc3f7a124ed10a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:04:10.587326  344166 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 23:04:10.587354  344166 start.go:241] waiting for startup goroutines ...
	I0831 23:04:10.587369  344166 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0831 23:04:10.587891  344166 config.go:182] Loaded profile config "ha-330867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:04:10.594327  344166 out.go:177] * Enabled addons: 
	I0831 23:04:10.598819  344166 addons.go:510] duration metric: took 11.447921ms for enable addons: enabled=[]
	I0831 23:04:10.598863  344166 start.go:246] waiting for cluster config update ...
	I0831 23:04:10.598873  344166 start.go:255] writing updated cluster config ...
	I0831 23:04:10.604080  344166 out.go:201] 
	I0831 23:04:10.608267  344166 config.go:182] Loaded profile config "ha-330867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:04:10.608383  344166 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/config.json ...
	I0831 23:04:10.612667  344166 out.go:177] * Starting "ha-330867-m02" control-plane node in "ha-330867" cluster
	I0831 23:04:10.616451  344166 cache.go:121] Beginning downloading kic base image for docker with crio
	I0831 23:04:10.620330  344166 out.go:177] * Pulling base image v0.0.44-1724862063-19530 ...
	I0831 23:04:10.622789  344166 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 23:04:10.622815  344166 cache.go:56] Caching tarball of preloaded images
	I0831 23:04:10.622847  344166 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local docker daemon
	I0831 23:04:10.622920  344166 preload.go:172] Found /home/jenkins/minikube-integration/18943-277799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0831 23:04:10.622937  344166 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0831 23:04:10.623063  344166 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/config.json ...
	W0831 23:04:10.641650  344166 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 is of wrong architecture
	I0831 23:04:10.641671  344166 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 to local cache
	I0831 23:04:10.641746  344166 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory
	I0831 23:04:10.641767  344166 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory, skipping pull
	I0831 23:04:10.641802  344166 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 exists in cache, skipping pull
	I0831 23:04:10.641819  344166 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 as a tarball
	I0831 23:04:10.641825  344166 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from local cache
	I0831 23:04:10.643014  344166 image.go:273] response: 
	I0831 23:04:10.809052  344166 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from cached tarball
	I0831 23:04:10.809093  344166 cache.go:194] Successfully downloaded all kic artifacts
	I0831 23:04:10.809125  344166 start.go:360] acquireMachinesLock for ha-330867-m02: {Name:mk1b868483094d3fb1d98465dcb37de63a18b6cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 23:04:10.809263  344166 start.go:364] duration metric: took 119.959µs to acquireMachinesLock for "ha-330867-m02"
	I0831 23:04:10.809292  344166 start.go:96] Skipping create...Using existing machine configuration
	I0831 23:04:10.809298  344166 fix.go:54] fixHost starting: m02
	I0831 23:04:10.809579  344166 cli_runner.go:164] Run: docker container inspect ha-330867-m02 --format={{.State.Status}}
	I0831 23:04:10.825679  344166 fix.go:112] recreateIfNeeded on ha-330867-m02: state=Stopped err=<nil>
	W0831 23:04:10.825716  344166 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 23:04:10.827670  344166 out.go:177] * Restarting existing docker container for "ha-330867-m02" ...
	I0831 23:04:10.831331  344166 cli_runner.go:164] Run: docker start ha-330867-m02
	I0831 23:04:11.136238  344166 cli_runner.go:164] Run: docker container inspect ha-330867-m02 --format={{.State.Status}}
	I0831 23:04:11.159650  344166 kic.go:435] container "ha-330867-m02" state is running.
	I0831 23:04:11.160051  344166 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-330867")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-330867-m02
	I0831 23:04:11.183895  344166 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/config.json ...
	I0831 23:04:11.184135  344166 machine.go:93] provisionDockerMachine start ...
	I0831 23:04:11.184195  344166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m02
	I0831 23:04:11.206345  344166 main.go:141] libmachine: Using SSH client type: native
	I0831 23:04:11.206578  344166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I0831 23:04:11.206587  344166 main.go:141] libmachine: About to run SSH command:
	hostname
	I0831 23:04:11.207274  344166 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0831 23:04:14.390833  344166 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-330867-m02
	
	I0831 23:04:14.390859  344166 ubuntu.go:169] provisioning hostname "ha-330867-m02"
	I0831 23:04:14.390937  344166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m02
	I0831 23:04:14.421252  344166 main.go:141] libmachine: Using SSH client type: native
	I0831 23:04:14.421493  344166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I0831 23:04:14.421510  344166 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-330867-m02 && echo "ha-330867-m02" | sudo tee /etc/hostname
	I0831 23:04:14.637743  344166 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-330867-m02
	
	I0831 23:04:14.637851  344166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m02
	I0831 23:04:14.672626  344166 main.go:141] libmachine: Using SSH client type: native
	I0831 23:04:14.672870  344166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I0831 23:04:14.672894  344166 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-330867-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-330867-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-330867-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 23:04:14.861815  344166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 23:04:14.861853  344166 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-277799/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-277799/.minikube}
	I0831 23:04:14.861869  344166 ubuntu.go:177] setting up certificates
	I0831 23:04:14.861886  344166 provision.go:84] configureAuth start
	I0831 23:04:14.861954  344166 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-330867")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-330867-m02
	I0831 23:04:14.883468  344166 provision.go:143] copyHostCerts
	I0831 23:04:14.883512  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-277799/.minikube/ca.pem
	I0831 23:04:14.883548  344166 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-277799/.minikube/ca.pem, removing ...
	I0831 23:04:14.883554  344166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.pem
	I0831 23:04:14.883641  344166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-277799/.minikube/ca.pem (1082 bytes)
	I0831 23:04:14.883721  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-277799/.minikube/cert.pem
	I0831 23:04:14.883742  344166 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-277799/.minikube/cert.pem, removing ...
	I0831 23:04:14.883747  344166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-277799/.minikube/cert.pem
	I0831 23:04:14.883773  344166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-277799/.minikube/cert.pem (1123 bytes)
	I0831 23:04:14.883831  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-277799/.minikube/key.pem
	I0831 23:04:14.883849  344166 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-277799/.minikube/key.pem, removing ...
	I0831 23:04:14.883854  344166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-277799/.minikube/key.pem
	I0831 23:04:14.883887  344166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-277799/.minikube/key.pem (1675 bytes)
	I0831 23:04:14.883975  344166 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca-key.pem org=jenkins.ha-330867-m02 san=[127.0.0.1 192.168.49.3 ha-330867-m02 localhost minikube]
	I0831 23:04:15.613522  344166 provision.go:177] copyRemoteCerts
	I0831 23:04:15.613634  344166 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 23:04:15.613709  344166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m02
	I0831 23:04:15.631815  344166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867-m02/id_rsa Username:docker}
	I0831 23:04:15.766824  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0831 23:04:15.766943  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0831 23:04:15.792638  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0831 23:04:15.792754  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0831 23:04:15.818258  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0831 23:04:15.818368  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0831 23:04:15.854062  344166 provision.go:87] duration metric: took 992.161444ms to configureAuth
	I0831 23:04:15.854139  344166 ubuntu.go:193] setting minikube options for container-runtime
	I0831 23:04:15.854463  344166 config.go:182] Loaded profile config "ha-330867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:04:15.854621  344166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m02
	I0831 23:04:15.886034  344166 main.go:141] libmachine: Using SSH client type: native
	I0831 23:04:15.886323  344166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I0831 23:04:15.886339  344166 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0831 23:04:16.492160  344166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0831 23:04:16.492187  344166 machine.go:96] duration metric: took 5.308041923s to provisionDockerMachine
	I0831 23:04:16.492199  344166 start.go:293] postStartSetup for "ha-330867-m02" (driver="docker")
	I0831 23:04:16.492210  344166 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 23:04:16.492268  344166 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 23:04:16.492308  344166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m02
	I0831 23:04:16.516622  344166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867-m02/id_rsa Username:docker}
	I0831 23:04:16.623140  344166 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 23:04:16.635084  344166 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0831 23:04:16.635120  344166 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0831 23:04:16.635133  344166 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0831 23:04:16.635140  344166 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0831 23:04:16.635150  344166 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-277799/.minikube/addons for local assets ...
	I0831 23:04:16.635214  344166 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-277799/.minikube/files for local assets ...
	I0831 23:04:16.635293  344166 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem -> 2831972.pem in /etc/ssl/certs
	I0831 23:04:16.635304  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem -> /etc/ssl/certs/2831972.pem
	I0831 23:04:16.635406  344166 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0831 23:04:16.646148  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem --> /etc/ssl/certs/2831972.pem (1708 bytes)
	I0831 23:04:16.688613  344166 start.go:296] duration metric: took 196.399226ms for postStartSetup
	I0831 23:04:16.688717  344166 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 23:04:16.688763  344166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m02
	I0831 23:04:16.713960  344166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867-m02/id_rsa Username:docker}
	I0831 23:04:16.812817  344166 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0831 23:04:16.818026  344166 fix.go:56] duration metric: took 6.008721128s for fixHost
	I0831 23:04:16.818047  344166 start.go:83] releasing machines lock for "ha-330867-m02", held for 6.008773682s
	I0831 23:04:16.818118  344166 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-330867")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-330867-m02
	I0831 23:04:16.840779  344166 out.go:177] * Found network options:
	I0831 23:04:16.842140  344166 out.go:177]   - NO_PROXY=192.168.49.2
	W0831 23:04:16.843219  344166 proxy.go:119] fail to check proxy env: Error ip not in block
	W0831 23:04:16.843256  344166 proxy.go:119] fail to check proxy env: Error ip not in block
	I0831 23:04:16.843321  344166 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0831 23:04:16.843366  344166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m02
	I0831 23:04:16.843597  344166 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 23:04:16.843648  344166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m02
	I0831 23:04:16.878268  344166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867-m02/id_rsa Username:docker}
	I0831 23:04:16.880659  344166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867-m02/id_rsa Username:docker}
	I0831 23:04:17.298173  344166 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0831 23:04:17.344480  344166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 23:04:17.451057  344166 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0831 23:04:17.451202  344166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 23:04:17.502423  344166 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0831 23:04:17.502496  344166 start.go:495] detecting cgroup driver to use...
	I0831 23:04:17.502546  344166 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0831 23:04:17.502624  344166 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0831 23:04:17.554311  344166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 23:04:17.597183  344166 docker.go:217] disabling cri-docker service (if available) ...
	I0831 23:04:17.597319  344166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0831 23:04:17.621562  344166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0831 23:04:17.643315  344166 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0831 23:04:17.990014  344166 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0831 23:04:18.315399  344166 docker.go:233] disabling docker service ...
	I0831 23:04:18.315465  344166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0831 23:04:18.361066  344166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0831 23:04:18.394205  344166 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0831 23:04:18.717893  344166 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0831 23:04:19.011408  344166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0831 23:04:19.070267  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 23:04:19.151798  344166 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0831 23:04:19.151918  344166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:04:19.180215  344166 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0831 23:04:19.180333  344166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:04:19.213531  344166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:04:19.259652  344166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:04:19.309048  344166 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 23:04:19.326695  344166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:04:19.373023  344166 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:04:19.403686  344166 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:04:19.418589  344166 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 23:04:19.454758  344166 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 23:04:19.479964  344166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 23:04:19.752986  344166 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0831 23:04:20.427764  344166 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0831 23:04:20.427887  344166 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0831 23:04:20.431918  344166 start.go:563] Will wait 60s for crictl version
	I0831 23:04:20.432029  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:04:20.439902  344166 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 23:04:20.531833  344166 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0831 23:04:20.531994  344166 ssh_runner.go:195] Run: crio --version
	I0831 23:04:20.611322  344166 ssh_runner.go:195] Run: crio --version
	I0831 23:04:20.697178  344166 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0831 23:04:20.698490  344166 out.go:177]   - env NO_PROXY=192.168.49.2
	I0831 23:04:20.699809  344166 cli_runner.go:164] Run: docker network inspect ha-330867 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0831 23:04:20.735812  344166 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0831 23:04:20.739802  344166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 23:04:20.762782  344166 mustload.go:65] Loading cluster: ha-330867
	I0831 23:04:20.763022  344166 config.go:182] Loaded profile config "ha-330867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:04:20.763297  344166 cli_runner.go:164] Run: docker container inspect ha-330867 --format={{.State.Status}}
	I0831 23:04:20.793992  344166 host.go:66] Checking if "ha-330867" exists ...
	I0831 23:04:20.794289  344166 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867 for IP: 192.168.49.3
	I0831 23:04:20.794298  344166 certs.go:194] generating shared ca certs ...
	I0831 23:04:20.794313  344166 certs.go:226] acquiring lock for ca certs: {Name:mk25c48345241d49df22687ae20353d5a7b46e8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:04:20.794421  344166 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key
	I0831 23:04:20.794461  344166 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key
	I0831 23:04:20.794467  344166 certs.go:256] generating profile certs ...
	I0831 23:04:20.794537  344166 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/client.key
	I0831 23:04:20.794597  344166 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.key.934ebd09
	I0831 23:04:20.794634  344166 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/proxy-client.key
	I0831 23:04:20.794643  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0831 23:04:20.794655  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0831 23:04:20.794666  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0831 23:04:20.794676  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0831 23:04:20.794686  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0831 23:04:20.794698  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0831 23:04:20.794710  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0831 23:04:20.794719  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0831 23:04:20.794769  344166 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/283197.pem (1338 bytes)
	W0831 23:04:20.794796  344166 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-277799/.minikube/certs/283197_empty.pem, impossibly tiny 0 bytes
	I0831 23:04:20.794805  344166 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca-key.pem (1679 bytes)
	I0831 23:04:20.794830  344166 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem (1082 bytes)
	I0831 23:04:20.794876  344166 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem (1123 bytes)
	I0831 23:04:20.794899  344166 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem (1675 bytes)
	I0831 23:04:20.794943  344166 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem (1708 bytes)
	I0831 23:04:20.794971  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:04:20.794986  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/283197.pem -> /usr/share/ca-certificates/283197.pem
	I0831 23:04:20.794998  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem -> /usr/share/ca-certificates/2831972.pem
	I0831 23:04:20.795056  344166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867
	I0831 23:04:20.826476  344166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867/id_rsa Username:docker}
	I0831 23:04:20.944761  344166 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0831 23:04:20.956022  344166 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0831 23:04:20.979269  344166 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0831 23:04:20.993523  344166 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0831 23:04:21.025988  344166 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0831 23:04:21.038861  344166 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0831 23:04:21.066494  344166 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0831 23:04:21.077773  344166 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0831 23:04:21.113953  344166 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0831 23:04:21.124708  344166 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0831 23:04:21.159347  344166 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0831 23:04:21.166900  344166 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0831 23:04:21.195262  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 23:04:21.269032  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 23:04:21.345708  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 23:04:21.414285  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0831 23:04:21.446143  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0831 23:04:21.473200  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0831 23:04:21.500468  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 23:04:21.532641  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0831 23:04:21.560638  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 23:04:21.587895  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/certs/283197.pem --> /usr/share/ca-certificates/283197.pem (1338 bytes)
	I0831 23:04:21.614880  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem --> /usr/share/ca-certificates/2831972.pem (1708 bytes)
	I0831 23:04:21.646280  344166 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0831 23:04:21.666875  344166 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0831 23:04:21.687676  344166 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0831 23:04:21.707668  344166 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0831 23:04:21.728854  344166 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0831 23:04:21.753503  344166 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0831 23:04:21.774089  344166 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0831 23:04:21.797670  344166 ssh_runner.go:195] Run: openssl version
	I0831 23:04:21.803895  344166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2831972.pem && ln -fs /usr/share/ca-certificates/2831972.pem /etc/ssl/certs/2831972.pem"
	I0831 23:04:21.814727  344166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2831972.pem
	I0831 23:04:21.818749  344166 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 31 22:51 /usr/share/ca-certificates/2831972.pem
	I0831 23:04:21.818865  344166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2831972.pem
	I0831 23:04:21.826382  344166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2831972.pem /etc/ssl/certs/3ec20f2e.0"
	I0831 23:04:21.836283  344166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 23:04:21.850524  344166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:04:21.854703  344166 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:04:21.854815  344166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:04:21.862214  344166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0831 23:04:21.871903  344166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/283197.pem && ln -fs /usr/share/ca-certificates/283197.pem /etc/ssl/certs/283197.pem"
	I0831 23:04:21.882236  344166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/283197.pem
	I0831 23:04:21.886465  344166 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 31 22:51 /usr/share/ca-certificates/283197.pem
	I0831 23:04:21.886583  344166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/283197.pem
	I0831 23:04:21.894208  344166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/283197.pem /etc/ssl/certs/51391683.0"
	I0831 23:04:21.904220  344166 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 23:04:21.908560  344166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0831 23:04:21.915953  344166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0831 23:04:21.924063  344166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0831 23:04:21.931706  344166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0831 23:04:21.939624  344166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0831 23:04:21.947249  344166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0831 23:04:21.959327  344166 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.31.0 crio true true} ...
	I0831 23:04:21.959494  344166 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-330867-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-330867 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0831 23:04:21.959541  344166 kube-vip.go:115] generating kube-vip config ...
	I0831 23:04:21.959615  344166 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0831 23:04:21.977877  344166 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0831 23:04:21.977994  344166 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0831 23:04:21.978083  344166 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0831 23:04:21.988703  344166 binaries.go:44] Found k8s binaries, skipping transfer
	I0831 23:04:21.988841  344166 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0831 23:04:21.998476  344166 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0831 23:04:22.025532  344166 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 23:04:22.048834  344166 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0831 23:04:22.094638  344166 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0831 23:04:22.099055  344166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 23:04:22.113640  344166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 23:04:22.300563  344166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 23:04:22.315488  344166 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0831 23:04:22.315936  344166 config.go:182] Loaded profile config "ha-330867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:04:22.317198  344166 out.go:177] * Verifying Kubernetes components...
	I0831 23:04:22.318389  344166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 23:04:22.493090  344166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 23:04:22.509237  344166 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-277799/kubeconfig
	I0831 23:04:22.509582  344166 kapi.go:59] client config for ha-330867: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/client.key", CAFile:"/home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19cbad0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0831 23:04:22.509672  344166 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0831 23:04:22.509930  344166 node_ready.go:35] waiting up to 6m0s for node "ha-330867-m02" to be "Ready" ...
	I0831 23:04:22.510029  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:04:22.510067  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:22.510089  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:22.510110  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:34.054308  344166 round_trippers.go:574] Response Status: 500 Internal Server Error in 11544 milliseconds
	I0831 23:04:34.054768  344166 node_ready.go:53] error getting node "ha-330867-m02": etcdserver: request timed out
	I0831 23:04:34.054834  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:04:34.054840  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:34.054848  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:34.054853  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:38.456345  344166 round_trippers.go:574] Response Status: 200 OK in 4401 milliseconds
	I0831 23:04:38.469767  344166 node_ready.go:49] node "ha-330867-m02" has status "Ready":"True"
	I0831 23:04:38.469793  344166 node_ready.go:38] duration metric: took 15.959827025s for node "ha-330867-m02" to be "Ready" ...
	I0831 23:04:38.469804  344166 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 23:04:38.469841  344166 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0831 23:04:38.469851  344166 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0831 23:04:38.469911  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0831 23:04:38.469917  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:38.469925  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:38.469930  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:38.504085  344166 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I0831 23:04:38.515857  344166 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-d67w5" in "kube-system" namespace to be "Ready" ...
	I0831 23:04:38.517807  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d67w5
	I0831 23:04:38.517865  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:38.517888  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:38.517909  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:38.521767  344166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:04:38.522466  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:04:38.522481  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:38.522489  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:38.522495  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:38.524751  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:04:38.525274  344166 pod_ready.go:93] pod "coredns-6f6b679f8f-d67w5" in "kube-system" namespace has status "Ready":"True"
	I0831 23:04:38.525290  344166 pod_ready.go:82] duration metric: took 8.921577ms for pod "coredns-6f6b679f8f-d67w5" in "kube-system" namespace to be "Ready" ...
	I0831 23:04:38.525301  344166 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-drznk" in "kube-system" namespace to be "Ready" ...
	I0831 23:04:38.525363  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-drznk
	I0831 23:04:38.525368  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:38.525376  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:38.525379  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:38.527904  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:04:38.528749  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:04:38.528793  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:38.528814  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:38.528835  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:38.531246  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:04:38.531782  344166 pod_ready.go:93] pod "coredns-6f6b679f8f-drznk" in "kube-system" namespace has status "Ready":"True"
	I0831 23:04:38.531821  344166 pod_ready.go:82] duration metric: took 6.512236ms for pod "coredns-6f6b679f8f-drznk" in "kube-system" namespace to be "Ready" ...
	I0831 23:04:38.531847  344166 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-330867" in "kube-system" namespace to be "Ready" ...
	I0831 23:04:38.531938  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867
	I0831 23:04:38.531963  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:38.531988  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:38.532008  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:38.534522  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:04:38.535262  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:04:38.535302  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:38.535322  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:38.535342  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:38.537726  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:04:38.538343  344166 pod_ready.go:93] pod "etcd-ha-330867" in "kube-system" namespace has status "Ready":"True"
	I0831 23:04:38.538386  344166 pod_ready.go:82] duration metric: took 6.516683ms for pod "etcd-ha-330867" in "kube-system" namespace to be "Ready" ...
	I0831 23:04:38.538412  344166 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:04:38.538502  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m02
	I0831 23:04:38.538526  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:38.538545  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:38.538565  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:38.541100  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:04:38.541746  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:04:38.541786  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:38.541808  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:38.541829  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:38.544095  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:04:38.544673  344166 pod_ready.go:93] pod "etcd-ha-330867-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 23:04:38.544722  344166 pod_ready.go:82] duration metric: took 6.288762ms for pod "etcd-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:04:38.544749  344166 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-330867-m03" in "kube-system" namespace to be "Ready" ...
	I0831 23:04:38.544838  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m03
	I0831 23:04:38.544864  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:38.544885  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:38.544904  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:38.547391  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:04:38.670441  344166 request.go:632] Waited for 122.243353ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:04:38.670514  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:04:38.670526  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:38.670536  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:38.670561  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:38.673266  344166 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0831 23:04:38.673468  344166 pod_ready.go:98] node "ha-330867-m03" hosting pod "etcd-ha-330867-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-330867-m03": nodes "ha-330867-m03" not found
	I0831 23:04:38.673489  344166 pod_ready.go:82] duration metric: took 128.719758ms for pod "etcd-ha-330867-m03" in "kube-system" namespace to be "Ready" ...
	E0831 23:04:38.673500  344166 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-330867-m03" hosting pod "etcd-ha-330867-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-330867-m03": nodes "ha-330867-m03" not found
	I0831 23:04:38.673521  344166 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-330867" in "kube-system" namespace to be "Ready" ...
	I0831 23:04:38.870988  344166 request.go:632] Waited for 197.384363ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-330867
	I0831 23:04:38.871142  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-330867
	I0831 23:04:38.871164  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:38.871174  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:38.871184  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:38.874723  344166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:04:39.070966  344166 request.go:632] Waited for 195.380394ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:04:39.071033  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:04:39.071043  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:39.071055  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:39.071062  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:39.074135  344166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:04:39.074819  344166 pod_ready.go:93] pod "kube-apiserver-ha-330867" in "kube-system" namespace has status "Ready":"True"
	I0831 23:04:39.074846  344166 pod_ready.go:82] duration metric: took 401.312592ms for pod "kube-apiserver-ha-330867" in "kube-system" namespace to be "Ready" ...
	I0831 23:04:39.074860  344166 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:04:39.270250  344166 request.go:632] Waited for 195.313851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-330867-m02
	I0831 23:04:39.270367  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-330867-m02
	I0831 23:04:39.270379  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:39.270389  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:39.270410  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:39.274409  344166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:04:39.470660  344166 request.go:632] Waited for 195.357232ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:04:39.470729  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:04:39.470766  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:39.470781  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:39.470787  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:39.473621  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:04:39.474444  344166 pod_ready.go:93] pod "kube-apiserver-ha-330867-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 23:04:39.474467  344166 pod_ready.go:82] duration metric: took 399.598567ms for pod "kube-apiserver-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:04:39.474479  344166 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-330867-m03" in "kube-system" namespace to be "Ready" ...
	I0831 23:04:39.670851  344166 request.go:632] Waited for 196.308529ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-330867-m03
	I0831 23:04:39.670952  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-330867-m03
	I0831 23:04:39.670972  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:39.671005  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:39.671032  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:39.673548  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:04:39.870965  344166 request.go:632] Waited for 196.367647ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:04:39.871051  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:04:39.871062  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:39.871072  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:39.871077  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:39.874335  344166 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0831 23:04:39.874489  344166 pod_ready.go:98] node "ha-330867-m03" hosting pod "kube-apiserver-ha-330867-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-330867-m03": nodes "ha-330867-m03" not found
	I0831 23:04:39.874512  344166 pod_ready.go:82] duration metric: took 400.02021ms for pod "kube-apiserver-ha-330867-m03" in "kube-system" namespace to be "Ready" ...
	E0831 23:04:39.874523  344166 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-330867-m03" hosting pod "kube-apiserver-ha-330867-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-330867-m03": nodes "ha-330867-m03" not found
	I0831 23:04:39.874532  344166 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-330867" in "kube-system" namespace to be "Ready" ...
	I0831 23:04:40.070925  344166 request.go:632] Waited for 196.29646ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-330867
	I0831 23:04:40.071059  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-330867
	I0831 23:04:40.071097  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:40.071125  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:40.071149  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:40.080071  344166 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0831 23:04:40.270158  344166 request.go:632] Waited for 188.263526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:04:40.270516  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:04:40.270565  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:40.270585  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:40.270594  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:40.281788  344166 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0831 23:04:40.282962  344166 pod_ready.go:93] pod "kube-controller-manager-ha-330867" in "kube-system" namespace has status "Ready":"True"
	I0831 23:04:40.282995  344166 pod_ready.go:82] duration metric: took 408.449933ms for pod "kube-controller-manager-ha-330867" in "kube-system" namespace to be "Ready" ...
	I0831 23:04:40.283007  344166 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:04:40.469985  344166 request.go:632] Waited for 186.90725ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-330867-m02
	I0831 23:04:40.470147  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-330867-m02
	I0831 23:04:40.470161  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:40.470194  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:40.470199  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:40.473664  344166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:04:40.670784  344166 request.go:632] Waited for 195.637435ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:04:40.670867  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:04:40.670879  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:40.670888  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:40.670898  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:40.674838  344166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:04:40.676400  344166 pod_ready.go:93] pod "kube-controller-manager-ha-330867-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 23:04:40.676472  344166 pod_ready.go:82] duration metric: took 393.454978ms for pod "kube-controller-manager-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:04:40.676498  344166 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-330867-m03" in "kube-system" namespace to be "Ready" ...
	I0831 23:04:40.870229  344166 request.go:632] Waited for 193.630439ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-330867-m03
	I0831 23:04:40.870349  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-330867-m03
	I0831 23:04:40.870360  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:40.870369  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:40.870383  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:40.873808  344166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:04:41.070490  344166 request.go:632] Waited for 195.300699ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:04:41.070560  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:04:41.070571  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:41.070580  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:41.070620  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:41.073767  344166 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0831 23:04:41.074062  344166 pod_ready.go:98] node "ha-330867-m03" hosting pod "kube-controller-manager-ha-330867-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-330867-m03": nodes "ha-330867-m03" not found
	I0831 23:04:41.074095  344166 pod_ready.go:82] duration metric: took 397.572618ms for pod "kube-controller-manager-ha-330867-m03" in "kube-system" namespace to be "Ready" ...
	E0831 23:04:41.074114  344166 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-330867-m03" hosting pod "kube-controller-manager-ha-330867-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-330867-m03": nodes "ha-330867-m03" not found
	I0831 23:04:41.074124  344166 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2km6v" in "kube-system" namespace to be "Ready" ...
	I0831 23:04:41.275753  344166 request.go:632] Waited for 201.506745ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2km6v
	I0831 23:04:41.275889  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2km6v
	I0831 23:04:41.275898  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:41.275908  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:41.275917  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:41.279005  344166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:04:41.470460  344166 request.go:632] Waited for 190.296631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:04:41.470576  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:04:41.470590  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:41.470600  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:41.470612  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:41.474236  344166 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0831 23:04:41.474609  344166 pod_ready.go:98] node "ha-330867-m03" hosting pod "kube-proxy-2km6v" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-330867-m03": nodes "ha-330867-m03" not found
	I0831 23:04:41.474633  344166 pod_ready.go:82] duration metric: took 400.493734ms for pod "kube-proxy-2km6v" in "kube-system" namespace to be "Ready" ...
	E0831 23:04:41.474660  344166 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-330867-m03" hosting pod "kube-proxy-2km6v" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-330867-m03": nodes "ha-330867-m03" not found
	I0831 23:04:41.474674  344166 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5n584" in "kube-system" namespace to be "Ready" ...
	I0831 23:04:41.670910  344166 request.go:632] Waited for 196.156399ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:04:41.670985  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:04:41.670996  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:41.671042  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:41.671053  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:41.674199  344166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:04:41.870708  344166 request.go:632] Waited for 195.341651ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:04:41.870789  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:04:41.870803  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:41.870813  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:41.870835  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:41.875283  344166 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0831 23:04:41.876627  344166 pod_ready.go:93] pod "kube-proxy-5n584" in "kube-system" namespace has status "Ready":"True"
	I0831 23:04:41.876652  344166 pod_ready.go:82] duration metric: took 401.959441ms for pod "kube-proxy-5n584" in "kube-system" namespace to be "Ready" ...
	I0831 23:04:41.876673  344166 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-72g7x" in "kube-system" namespace to be "Ready" ...
	I0831 23:04:42.070176  344166 request.go:632] Waited for 193.365513ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-72g7x
	I0831 23:04:42.070310  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-72g7x
	I0831 23:04:42.070323  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:42.070333  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:42.070341  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:42.077874  344166 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0831 23:04:42.270624  344166 request.go:632] Waited for 191.329315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:04:42.270770  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:04:42.270784  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:42.270799  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:42.270823  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:42.282921  344166 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0831 23:04:42.284123  344166 pod_ready.go:93] pod "kube-proxy-72g7x" in "kube-system" namespace has status "Ready":"True"
	I0831 23:04:42.284151  344166 pod_ready.go:82] duration metric: took 407.411531ms for pod "kube-proxy-72g7x" in "kube-system" namespace to be "Ready" ...
	I0831 23:04:42.284165  344166 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fzpmn" in "kube-system" namespace to be "Ready" ...
	I0831 23:04:42.470006  344166 request.go:632] Waited for 185.740201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fzpmn
	I0831 23:04:42.470090  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fzpmn
	I0831 23:04:42.470097  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:42.470107  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:42.470116  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:42.473858  344166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:04:42.670403  344166 request.go:632] Waited for 195.124888ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:04:42.670583  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:04:42.670594  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:42.670602  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:42.670607  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:42.673935  344166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:04:42.675095  344166 pod_ready.go:93] pod "kube-proxy-fzpmn" in "kube-system" namespace has status "Ready":"True"
	I0831 23:04:42.675152  344166 pod_ready.go:82] duration metric: took 390.978356ms for pod "kube-proxy-fzpmn" in "kube-system" namespace to be "Ready" ...
	I0831 23:04:42.675188  344166 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-330867" in "kube-system" namespace to be "Ready" ...
	I0831 23:04:42.870464  344166 request.go:632] Waited for 195.178443ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-330867
	I0831 23:04:42.870665  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-330867
	I0831 23:04:42.870675  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:42.870682  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:42.870689  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:42.874481  344166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:04:43.070031  344166 request.go:632] Waited for 194.156041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:04:43.070195  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:04:43.070237  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:43.070262  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:43.070282  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:43.073681  344166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:04:43.074878  344166 pod_ready.go:93] pod "kube-scheduler-ha-330867" in "kube-system" namespace has status "Ready":"True"
	I0831 23:04:43.074939  344166 pod_ready.go:82] duration metric: took 399.730063ms for pod "kube-scheduler-ha-330867" in "kube-system" namespace to be "Ready" ...
	I0831 23:04:43.074967  344166 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:04:43.270331  344166 request.go:632] Waited for 195.253536ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-330867-m02
	I0831 23:04:43.270476  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-330867-m02
	I0831 23:04:43.270497  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:43.270538  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:43.270558  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:43.290952  344166 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0831 23:04:43.470419  344166 request.go:632] Waited for 178.310153ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:04:43.470534  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:04:43.470572  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:43.470601  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:43.470624  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:43.473896  344166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:04:43.475230  344166 pod_ready.go:93] pod "kube-scheduler-ha-330867-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 23:04:43.475292  344166 pod_ready.go:82] duration metric: took 400.304042ms for pod "kube-scheduler-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:04:43.475318  344166 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-330867-m03" in "kube-system" namespace to be "Ready" ...
	I0831 23:04:43.670019  344166 request.go:632] Waited for 194.591698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-330867-m03
	I0831 23:04:43.670158  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-330867-m03
	I0831 23:04:43.670242  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:43.670281  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:43.670303  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:43.673970  344166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:04:43.869974  344166 request.go:632] Waited for 195.272244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:04:43.870041  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m03
	I0831 23:04:43.870048  344166 round_trippers.go:469] Request Headers:
	I0831 23:04:43.870056  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:04:43.870061  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:04:43.872707  344166 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0831 23:04:43.872828  344166 pod_ready.go:98] node "ha-330867-m03" hosting pod "kube-scheduler-ha-330867-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-330867-m03": nodes "ha-330867-m03" not found
	I0831 23:04:43.872840  344166 pod_ready.go:82] duration metric: took 397.50372ms for pod "kube-scheduler-ha-330867-m03" in "kube-system" namespace to be "Ready" ...
	E0831 23:04:43.872851  344166 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-330867-m03" hosting pod "kube-scheduler-ha-330867-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-330867-m03": nodes "ha-330867-m03" not found
	I0831 23:04:43.872860  344166 pod_ready.go:39] duration metric: took 5.403045315s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 23:04:43.872875  344166 api_server.go:52] waiting for apiserver process to appear ...
	I0831 23:04:43.872938  344166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 23:04:43.887249  344166 api_server.go:72] duration metric: took 21.571671032s to wait for apiserver process to appear ...
	I0831 23:04:43.887331  344166 api_server.go:88] waiting for apiserver healthz status ...
	I0831 23:04:43.887367  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:43.895206  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:43.895230  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:04:44.387496  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:44.395251  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:44.395281  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:04:44.887483  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:44.895212  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:44.895248  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:04:45.387501  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:45.402035  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:45.402072  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:04:45.887576  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:45.895652  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:45.895685  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:04:46.387809  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:46.396666  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:46.396738  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:04:46.888376  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:46.897893  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:46.897938  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:04:47.387807  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:47.395471  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:47.395500  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:04:47.888102  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:47.896042  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:47.896072  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:04:48.387501  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:48.395221  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:48.395248  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:04:48.887476  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:48.895352  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:48.895383  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:04:49.387810  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:49.395453  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:49.395485  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:04:49.888065  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:49.895920  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:49.895993  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:04:50.387492  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:50.395186  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:50.395214  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
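	(The poll/500 cycle above repeats verbatim roughly every 500 ms, per the timestamps on the api_server.go lines, until the start-service-ip-repair-controllers post-start hook reports ok. As a rough sketch of that kind of wait loop, and not minikube's actual implementation, the helper name, overall timeout, and TLS handling below are assumptions; only the endpoint and the ~500 ms interval are taken from the log.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// pollHealthz is a hypothetical sketch of the loop api_server.go appears to be
	// running above: query /healthz on a fixed interval and stop once the
	// apiserver answers 200. Endpoint, interval, and timeout are caller-supplied.
	func pollHealthz(endpoint string, interval, timeout time.Duration) error {
		client := &http.Client{
			// minikube's self-signed apiserver cert would not verify here;
			// verification is skipped for this sketch only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(endpoint)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz reports ok
				}
				// Mirror the log above: a 500 carries the per-hook [+]/[-] report.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		// Endpoint and interval taken from the log; the 4-minute budget is an assumption.
		if err := pollHealthz("https://192.168.49.2:8443/healthz", 500*time.Millisecond, 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}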
	I0831 23:04:50.887963  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:50.895601  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:50.895638  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:04:51.388219  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:51.395910  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:51.395946  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:04:51.887461  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:51.895459  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:51.895486  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:04:52.388517  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:52.402770  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:52.402800  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:04:52.888398  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:52.896504  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:52.896541  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:04:53.387669  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:53.396030  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:53.396060  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:04:53.887639  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:53.895835  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:53.895885  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:04:54.388468  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:54.399526  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:54.399560  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:04:54.888277  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:54.896331  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:54.896368  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:04:55.387889  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:55.395561  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:55.395593  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:04:55.888226  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:55.896146  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:55.896173  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:04:56.388376  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:56.398111  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:56.398142  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:04:56.887729  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:56.895485  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:56.895515  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:04:57.388331  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:57.397131  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:57.397159  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:04:57.887459  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:57.928740  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:57.928773  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:04:58.388447  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:58.419387  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:58.419471  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:04:58.887899  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:58.899335  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:58.899416  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:04:59.388050  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:59.397380  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:59.397460  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:04:59.888229  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:04:59.898813  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:04:59.898899  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:00.390346  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:00.419281  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:00.419371  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:00.888220  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:00.904817  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:00.904866  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:01.387507  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:01.395640  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:01.395672  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:01.888363  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:01.896190  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:01.896219  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:02.388305  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:02.396359  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:02.396388  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:02.887889  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:02.896238  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:02.896283  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:03.387783  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:03.409091  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:03.409122  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:03.887637  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:03.895856  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:03.895887  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:04.387494  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:04.395558  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:04.395590  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:04.888302  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:04.897399  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:04.897431  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:05.387998  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:05.395931  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:05.395967  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:05.887474  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:05.895490  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:05.895515  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:06.387700  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:06.399629  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:06.399659  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:06.888233  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:06.897763  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:06.897806  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:07.387562  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:07.395570  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:07.395606  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:07.887815  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:07.895768  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:07.895822  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:08.388169  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:08.396484  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:08.396514  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:08.888189  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:08.896475  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:08.896510  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:09.387961  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:09.395746  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:09.395773  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:09.887490  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:09.896108  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:09.896140  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:10.387992  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:10.396859  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:10.396895  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:10.887730  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:10.896543  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:10.896575  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:11.388116  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:11.398082  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:11.398110  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:11.887629  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:11.895906  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:11.895937  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:12.388161  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:12.395895  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:12.395925  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:12.887550  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:12.896618  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:12.896647  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:13.388081  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:13.397022  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:13.397055  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:13.887818  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:13.896568  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:13.896598  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:14.388261  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:14.396300  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:14.396328  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:14.887911  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:14.895986  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:14.896014  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:15.387504  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:15.395604  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:15.395634  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:15.888258  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:15.895849  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:15.895884  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:16.388481  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:16.397689  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:16.397719  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:16.888358  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:16.897379  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:16.897407  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:17.388286  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:17.396160  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:17.396187  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:17.887835  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:17.895820  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:17.895850  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:18.387481  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:18.395795  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:18.395831  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:18.888447  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:18.896065  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:18.896093  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:19.387502  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:19.395201  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:19.395231  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:19.887761  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:19.895808  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:19.895837  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:20.388475  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:20.396818  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:20.396850  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:20.887693  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:20.896849  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:20.896881  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:21.387411  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:21.395897  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:21.395926  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:21.887427  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:21.894961  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:21.894991  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:22.387950  344166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0831 23:05:22.388060  344166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0831 23:05:22.429073  344166 cri.go:89] found id: "92bd4028dde81f34e2ad3f639e894ff3b117a5fffe297efcb58560121e9a68f3"
	I0831 23:05:22.429099  344166 cri.go:89] found id: "5c39569c9db07aa1bf9c812f831f8792513253bf9323fbe8fabceccbeeea0ca0"
	I0831 23:05:22.429104  344166 cri.go:89] found id: ""
	I0831 23:05:22.429111  344166 logs.go:276] 2 containers: [92bd4028dde81f34e2ad3f639e894ff3b117a5fffe297efcb58560121e9a68f3 5c39569c9db07aa1bf9c812f831f8792513253bf9323fbe8fabceccbeeea0ca0]
	I0831 23:05:22.429167  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:05:22.432794  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:05:22.437444  344166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0831 23:05:22.437514  344166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0831 23:05:22.481654  344166 cri.go:89] found id: "c9efeb49a5394d1a1adb4af3472e92d459baaa31cc434f0a1285baa144444180"
	I0831 23:05:22.481678  344166 cri.go:89] found id: "53f03972f47aef41910e6abff8733eec4092f53dab600af0c8b5d18977df3b0c"
	I0831 23:05:22.481683  344166 cri.go:89] found id: ""
	I0831 23:05:22.481689  344166 logs.go:276] 2 containers: [c9efeb49a5394d1a1adb4af3472e92d459baaa31cc434f0a1285baa144444180 53f03972f47aef41910e6abff8733eec4092f53dab600af0c8b5d18977df3b0c]
	I0831 23:05:22.481748  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:05:22.485241  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:05:22.488463  344166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0831 23:05:22.488537  344166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0831 23:05:22.525583  344166 cri.go:89] found id: ""
	I0831 23:05:22.525607  344166 logs.go:276] 0 containers: []
	W0831 23:05:22.525616  344166 logs.go:278] No container was found matching "coredns"
	I0831 23:05:22.525622  344166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0831 23:05:22.525681  344166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0831 23:05:22.563984  344166 cri.go:89] found id: "e52aa8b5f7943decde6ee748970e11212f0acb96d0b307504192eada43b4344a"
	I0831 23:05:22.564008  344166 cri.go:89] found id: "53f68c3c790f0330e219cb5eea94d791afc0054fc373a53a4fbf75649bb69b7c"
	I0831 23:05:22.564013  344166 cri.go:89] found id: ""
	I0831 23:05:22.564020  344166 logs.go:276] 2 containers: [e52aa8b5f7943decde6ee748970e11212f0acb96d0b307504192eada43b4344a 53f68c3c790f0330e219cb5eea94d791afc0054fc373a53a4fbf75649bb69b7c]
	I0831 23:05:22.564077  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:05:22.567848  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:05:22.571641  344166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0831 23:05:22.571714  344166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0831 23:05:22.610403  344166 cri.go:89] found id: "26a7774a7332b4c5eef4d467ffcd3355cf6f9032b16b3988b5094e6c990b252e"
	I0831 23:05:22.610433  344166 cri.go:89] found id: ""
	I0831 23:05:22.610442  344166 logs.go:276] 1 containers: [26a7774a7332b4c5eef4d467ffcd3355cf6f9032b16b3988b5094e6c990b252e]
	I0831 23:05:22.610500  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:05:22.614167  344166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0831 23:05:22.614264  344166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0831 23:05:22.654115  344166 cri.go:89] found id: "60ae33a5780a9506abafa648252f519b94a301807bd5e6cb2e680cbef8f221e3"
	I0831 23:05:22.654136  344166 cri.go:89] found id: "8bdc038e6aaf309754e31bc4bacb8d92bff96de26d07f48b63951679d0326ed9"
	I0831 23:05:22.654141  344166 cri.go:89] found id: ""
	I0831 23:05:22.654148  344166 logs.go:276] 2 containers: [60ae33a5780a9506abafa648252f519b94a301807bd5e6cb2e680cbef8f221e3 8bdc038e6aaf309754e31bc4bacb8d92bff96de26d07f48b63951679d0326ed9]
	I0831 23:05:22.654204  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:05:22.657931  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:05:22.661660  344166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0831 23:05:22.661732  344166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0831 23:05:22.707971  344166 cri.go:89] found id: "b9757e6d4f6c2e9aeb8c8c78c4d3f352c690c9bace38f31363705ffd763d3279"
	I0831 23:05:22.707993  344166 cri.go:89] found id: ""
	I0831 23:05:22.708001  344166 logs.go:276] 1 containers: [b9757e6d4f6c2e9aeb8c8c78c4d3f352c690c9bace38f31363705ffd763d3279]
	I0831 23:05:22.708053  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:05:22.712665  344166 logs.go:123] Gathering logs for kube-apiserver [92bd4028dde81f34e2ad3f639e894ff3b117a5fffe297efcb58560121e9a68f3] ...
	I0831 23:05:22.712687  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92bd4028dde81f34e2ad3f639e894ff3b117a5fffe297efcb58560121e9a68f3"
	I0831 23:05:22.766014  344166 logs.go:123] Gathering logs for etcd [53f03972f47aef41910e6abff8733eec4092f53dab600af0c8b5d18977df3b0c] ...
	I0831 23:05:22.766048  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53f03972f47aef41910e6abff8733eec4092f53dab600af0c8b5d18977df3b0c"
	I0831 23:05:22.817145  344166 logs.go:123] Gathering logs for CRI-O ...
	I0831 23:05:22.817180  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0831 23:05:22.893628  344166 logs.go:123] Gathering logs for kubelet ...
	I0831 23:05:22.893666  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 23:05:22.972161  344166 logs.go:123] Gathering logs for kube-scheduler [e52aa8b5f7943decde6ee748970e11212f0acb96d0b307504192eada43b4344a] ...
	I0831 23:05:22.972200  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e52aa8b5f7943decde6ee748970e11212f0acb96d0b307504192eada43b4344a"
	I0831 23:05:23.050204  344166 logs.go:123] Gathering logs for kube-controller-manager [8bdc038e6aaf309754e31bc4bacb8d92bff96de26d07f48b63951679d0326ed9] ...
	I0831 23:05:23.050246  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bdc038e6aaf309754e31bc4bacb8d92bff96de26d07f48b63951679d0326ed9"
	I0831 23:05:23.109855  344166 logs.go:123] Gathering logs for container status ...
	I0831 23:05:23.109880  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 23:05:23.183760  344166 logs.go:123] Gathering logs for describe nodes ...
	I0831 23:05:23.183789  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 23:05:23.478776  344166 logs.go:123] Gathering logs for etcd [c9efeb49a5394d1a1adb4af3472e92d459baaa31cc434f0a1285baa144444180] ...
	I0831 23:05:23.478811  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9efeb49a5394d1a1adb4af3472e92d459baaa31cc434f0a1285baa144444180"
	I0831 23:05:23.540132  344166 logs.go:123] Gathering logs for kube-scheduler [53f68c3c790f0330e219cb5eea94d791afc0054fc373a53a4fbf75649bb69b7c] ...
	I0831 23:05:23.540162  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53f68c3c790f0330e219cb5eea94d791afc0054fc373a53a4fbf75649bb69b7c"
	I0831 23:05:23.581060  344166 logs.go:123] Gathering logs for kube-proxy [26a7774a7332b4c5eef4d467ffcd3355cf6f9032b16b3988b5094e6c990b252e] ...
	I0831 23:05:23.581088  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26a7774a7332b4c5eef4d467ffcd3355cf6f9032b16b3988b5094e6c990b252e"
	I0831 23:05:23.620503  344166 logs.go:123] Gathering logs for dmesg ...
	I0831 23:05:23.620532  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 23:05:23.638134  344166 logs.go:123] Gathering logs for kube-apiserver [5c39569c9db07aa1bf9c812f831f8792513253bf9323fbe8fabceccbeeea0ca0] ...
	I0831 23:05:23.638163  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c39569c9db07aa1bf9c812f831f8792513253bf9323fbe8fabceccbeeea0ca0"
	I0831 23:05:23.683279  344166 logs.go:123] Gathering logs for kube-controller-manager [60ae33a5780a9506abafa648252f519b94a301807bd5e6cb2e680cbef8f221e3] ...
	I0831 23:05:23.683308  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60ae33a5780a9506abafa648252f519b94a301807bd5e6cb2e680cbef8f221e3"
	I0831 23:05:23.762742  344166 logs.go:123] Gathering logs for kindnet [b9757e6d4f6c2e9aeb8c8c78c4d3f352c690c9bace38f31363705ffd763d3279] ...
	I0831 23:05:23.762778  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9757e6d4f6c2e9aeb8c8c78c4d3f352c690c9bace38f31363705ffd763d3279"
	I0831 23:05:26.312166  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:26.951004  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0831 23:05:26.951090  344166 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0831 23:05:26.951158  344166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0831 23:05:26.951262  344166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0831 23:05:27.044086  344166 cri.go:89] found id: "92bd4028dde81f34e2ad3f639e894ff3b117a5fffe297efcb58560121e9a68f3"
	I0831 23:05:27.044163  344166 cri.go:89] found id: "5c39569c9db07aa1bf9c812f831f8792513253bf9323fbe8fabceccbeeea0ca0"
	I0831 23:05:27.044182  344166 cri.go:89] found id: ""
	I0831 23:05:27.044206  344166 logs.go:276] 2 containers: [92bd4028dde81f34e2ad3f639e894ff3b117a5fffe297efcb58560121e9a68f3 5c39569c9db07aa1bf9c812f831f8792513253bf9323fbe8fabceccbeeea0ca0]
	I0831 23:05:27.044299  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:05:27.048991  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:05:27.055220  344166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0831 23:05:27.055303  344166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0831 23:05:27.098549  344166 cri.go:89] found id: "c9efeb49a5394d1a1adb4af3472e92d459baaa31cc434f0a1285baa144444180"
	I0831 23:05:27.098578  344166 cri.go:89] found id: "53f03972f47aef41910e6abff8733eec4092f53dab600af0c8b5d18977df3b0c"
	I0831 23:05:27.098583  344166 cri.go:89] found id: ""
	I0831 23:05:27.098591  344166 logs.go:276] 2 containers: [c9efeb49a5394d1a1adb4af3472e92d459baaa31cc434f0a1285baa144444180 53f03972f47aef41910e6abff8733eec4092f53dab600af0c8b5d18977df3b0c]
	I0831 23:05:27.098684  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:05:27.102425  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:05:27.107240  344166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0831 23:05:27.107324  344166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0831 23:05:27.148529  344166 cri.go:89] found id: ""
	I0831 23:05:27.148564  344166 logs.go:276] 0 containers: []
	W0831 23:05:27.148575  344166 logs.go:278] No container was found matching "coredns"
	I0831 23:05:27.148581  344166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0831 23:05:27.148652  344166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0831 23:05:27.201465  344166 cri.go:89] found id: "e52aa8b5f7943decde6ee748970e11212f0acb96d0b307504192eada43b4344a"
	I0831 23:05:27.201489  344166 cri.go:89] found id: "53f68c3c790f0330e219cb5eea94d791afc0054fc373a53a4fbf75649bb69b7c"
	I0831 23:05:27.201495  344166 cri.go:89] found id: ""
	I0831 23:05:27.201503  344166 logs.go:276] 2 containers: [e52aa8b5f7943decde6ee748970e11212f0acb96d0b307504192eada43b4344a 53f68c3c790f0330e219cb5eea94d791afc0054fc373a53a4fbf75649bb69b7c]
	I0831 23:05:27.201568  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:05:27.205568  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:05:27.209357  344166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0831 23:05:27.209433  344166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0831 23:05:27.253080  344166 cri.go:89] found id: "26a7774a7332b4c5eef4d467ffcd3355cf6f9032b16b3988b5094e6c990b252e"
	I0831 23:05:27.253103  344166 cri.go:89] found id: ""
	I0831 23:05:27.253118  344166 logs.go:276] 1 containers: [26a7774a7332b4c5eef4d467ffcd3355cf6f9032b16b3988b5094e6c990b252e]
	I0831 23:05:27.253182  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:05:27.257095  344166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0831 23:05:27.257217  344166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0831 23:05:27.304454  344166 cri.go:89] found id: "60ae33a5780a9506abafa648252f519b94a301807bd5e6cb2e680cbef8f221e3"
	I0831 23:05:27.304527  344166 cri.go:89] found id: "8bdc038e6aaf309754e31bc4bacb8d92bff96de26d07f48b63951679d0326ed9"
	I0831 23:05:27.304563  344166 cri.go:89] found id: ""
	I0831 23:05:27.304590  344166 logs.go:276] 2 containers: [60ae33a5780a9506abafa648252f519b94a301807bd5e6cb2e680cbef8f221e3 8bdc038e6aaf309754e31bc4bacb8d92bff96de26d07f48b63951679d0326ed9]
	I0831 23:05:27.304678  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:05:27.308802  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:05:27.312487  344166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0831 23:05:27.312607  344166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0831 23:05:27.364826  344166 cri.go:89] found id: "b9757e6d4f6c2e9aeb8c8c78c4d3f352c690c9bace38f31363705ffd763d3279"
	I0831 23:05:27.364898  344166 cri.go:89] found id: ""
	I0831 23:05:27.364920  344166 logs.go:276] 1 containers: [b9757e6d4f6c2e9aeb8c8c78c4d3f352c690c9bace38f31363705ffd763d3279]
	I0831 23:05:27.364984  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:05:27.368539  344166 logs.go:123] Gathering logs for kindnet [b9757e6d4f6c2e9aeb8c8c78c4d3f352c690c9bace38f31363705ffd763d3279] ...
	I0831 23:05:27.368566  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9757e6d4f6c2e9aeb8c8c78c4d3f352c690c9bace38f31363705ffd763d3279"
	I0831 23:05:27.407196  344166 logs.go:123] Gathering logs for kubelet ...
	I0831 23:05:27.407225  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 23:05:27.484631  344166 logs.go:123] Gathering logs for describe nodes ...
	I0831 23:05:27.484669  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 23:05:27.939824  344166 logs.go:123] Gathering logs for kube-apiserver [92bd4028dde81f34e2ad3f639e894ff3b117a5fffe297efcb58560121e9a68f3] ...
	I0831 23:05:27.939870  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92bd4028dde81f34e2ad3f639e894ff3b117a5fffe297efcb58560121e9a68f3"
	I0831 23:05:28.000051  344166 logs.go:123] Gathering logs for dmesg ...
	I0831 23:05:28.000085  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 23:05:28.027926  344166 logs.go:123] Gathering logs for kube-controller-manager [8bdc038e6aaf309754e31bc4bacb8d92bff96de26d07f48b63951679d0326ed9] ...
	I0831 23:05:28.027967  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bdc038e6aaf309754e31bc4bacb8d92bff96de26d07f48b63951679d0326ed9"
	I0831 23:05:28.087069  344166 logs.go:123] Gathering logs for etcd [c9efeb49a5394d1a1adb4af3472e92d459baaa31cc434f0a1285baa144444180] ...
	I0831 23:05:28.087104  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9efeb49a5394d1a1adb4af3472e92d459baaa31cc434f0a1285baa144444180"
	I0831 23:05:28.141306  344166 logs.go:123] Gathering logs for kube-scheduler [53f68c3c790f0330e219cb5eea94d791afc0054fc373a53a4fbf75649bb69b7c] ...
	I0831 23:05:28.141346  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53f68c3c790f0330e219cb5eea94d791afc0054fc373a53a4fbf75649bb69b7c"
	I0831 23:05:28.207257  344166 logs.go:123] Gathering logs for kube-controller-manager [60ae33a5780a9506abafa648252f519b94a301807bd5e6cb2e680cbef8f221e3] ...
	I0831 23:05:28.207293  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60ae33a5780a9506abafa648252f519b94a301807bd5e6cb2e680cbef8f221e3"
	I0831 23:05:28.286611  344166 logs.go:123] Gathering logs for kube-proxy [26a7774a7332b4c5eef4d467ffcd3355cf6f9032b16b3988b5094e6c990b252e] ...
	I0831 23:05:28.286648  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26a7774a7332b4c5eef4d467ffcd3355cf6f9032b16b3988b5094e6c990b252e"
	I0831 23:05:28.330541  344166 logs.go:123] Gathering logs for CRI-O ...
	I0831 23:05:28.330573  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0831 23:05:28.402865  344166 logs.go:123] Gathering logs for container status ...
	I0831 23:05:28.403773  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 23:05:28.449656  344166 logs.go:123] Gathering logs for kube-apiserver [5c39569c9db07aa1bf9c812f831f8792513253bf9323fbe8fabceccbeeea0ca0] ...
	I0831 23:05:28.449685  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c39569c9db07aa1bf9c812f831f8792513253bf9323fbe8fabceccbeeea0ca0"
	I0831 23:05:28.490130  344166 logs.go:123] Gathering logs for etcd [53f03972f47aef41910e6abff8733eec4092f53dab600af0c8b5d18977df3b0c] ...
	I0831 23:05:28.490159  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53f03972f47aef41910e6abff8733eec4092f53dab600af0c8b5d18977df3b0c"
	I0831 23:05:28.550928  344166 logs.go:123] Gathering logs for kube-scheduler [e52aa8b5f7943decde6ee748970e11212f0acb96d0b307504192eada43b4344a] ...
	I0831 23:05:28.551023  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e52aa8b5f7943decde6ee748970e11212f0acb96d0b307504192eada43b4344a"
	I0831 23:05:31.138985  344166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 23:05:31.147924  344166 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0831 23:05:31.148033  344166 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0831 23:05:31.148047  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:31.148056  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:31.148064  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:31.161163  344166 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0831 23:05:31.161293  344166 api_server.go:141] control plane version: v1.31.0
	I0831 23:05:31.161313  344166 api_server.go:131] duration metric: took 47.273960791s to wait for apiserver health ...
	I0831 23:05:31.161323  344166 system_pods.go:43] waiting for kube-system pods to appear ...
	I0831 23:05:31.161349  344166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0831 23:05:31.161416  344166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0831 23:05:31.206612  344166 cri.go:89] found id: "92bd4028dde81f34e2ad3f639e894ff3b117a5fffe297efcb58560121e9a68f3"
	I0831 23:05:31.206633  344166 cri.go:89] found id: "5c39569c9db07aa1bf9c812f831f8792513253bf9323fbe8fabceccbeeea0ca0"
	I0831 23:05:31.206638  344166 cri.go:89] found id: ""
	I0831 23:05:31.206644  344166 logs.go:276] 2 containers: [92bd4028dde81f34e2ad3f639e894ff3b117a5fffe297efcb58560121e9a68f3 5c39569c9db07aa1bf9c812f831f8792513253bf9323fbe8fabceccbeeea0ca0]
	I0831 23:05:31.206701  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:05:31.210346  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:05:31.213714  344166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0831 23:05:31.213785  344166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0831 23:05:31.252356  344166 cri.go:89] found id: "c9efeb49a5394d1a1adb4af3472e92d459baaa31cc434f0a1285baa144444180"
	I0831 23:05:31.252382  344166 cri.go:89] found id: "53f03972f47aef41910e6abff8733eec4092f53dab600af0c8b5d18977df3b0c"
	I0831 23:05:31.252388  344166 cri.go:89] found id: ""
	I0831 23:05:31.252395  344166 logs.go:276] 2 containers: [c9efeb49a5394d1a1adb4af3472e92d459baaa31cc434f0a1285baa144444180 53f03972f47aef41910e6abff8733eec4092f53dab600af0c8b5d18977df3b0c]
	I0831 23:05:31.252497  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:05:31.256005  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:05:31.259441  344166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0831 23:05:31.259547  344166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0831 23:05:31.296974  344166 cri.go:89] found id: ""
	I0831 23:05:31.297051  344166 logs.go:276] 0 containers: []
	W0831 23:05:31.297068  344166 logs.go:278] No container was found matching "coredns"
	I0831 23:05:31.297075  344166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0831 23:05:31.297149  344166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0831 23:05:31.334386  344166 cri.go:89] found id: "e52aa8b5f7943decde6ee748970e11212f0acb96d0b307504192eada43b4344a"
	I0831 23:05:31.334409  344166 cri.go:89] found id: "53f68c3c790f0330e219cb5eea94d791afc0054fc373a53a4fbf75649bb69b7c"
	I0831 23:05:31.334414  344166 cri.go:89] found id: ""
	I0831 23:05:31.334421  344166 logs.go:276] 2 containers: [e52aa8b5f7943decde6ee748970e11212f0acb96d0b307504192eada43b4344a 53f68c3c790f0330e219cb5eea94d791afc0054fc373a53a4fbf75649bb69b7c]
	I0831 23:05:31.334504  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:05:31.338233  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:05:31.341509  344166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0831 23:05:31.341591  344166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0831 23:05:31.379962  344166 cri.go:89] found id: "26a7774a7332b4c5eef4d467ffcd3355cf6f9032b16b3988b5094e6c990b252e"
	I0831 23:05:31.379986  344166 cri.go:89] found id: ""
	I0831 23:05:31.379995  344166 logs.go:276] 1 containers: [26a7774a7332b4c5eef4d467ffcd3355cf6f9032b16b3988b5094e6c990b252e]
	I0831 23:05:31.380064  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:05:31.383692  344166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0831 23:05:31.383775  344166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0831 23:05:31.420725  344166 cri.go:89] found id: "60ae33a5780a9506abafa648252f519b94a301807bd5e6cb2e680cbef8f221e3"
	I0831 23:05:31.420746  344166 cri.go:89] found id: "8bdc038e6aaf309754e31bc4bacb8d92bff96de26d07f48b63951679d0326ed9"
	I0831 23:05:31.420752  344166 cri.go:89] found id: ""
	I0831 23:05:31.420759  344166 logs.go:276] 2 containers: [60ae33a5780a9506abafa648252f519b94a301807bd5e6cb2e680cbef8f221e3 8bdc038e6aaf309754e31bc4bacb8d92bff96de26d07f48b63951679d0326ed9]
	I0831 23:05:31.420832  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:05:31.424480  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:05:31.427647  344166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0831 23:05:31.427716  344166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0831 23:05:31.465688  344166 cri.go:89] found id: "b9757e6d4f6c2e9aeb8c8c78c4d3f352c690c9bace38f31363705ffd763d3279"
	I0831 23:05:31.465720  344166 cri.go:89] found id: ""
	I0831 23:05:31.465729  344166 logs.go:276] 1 containers: [b9757e6d4f6c2e9aeb8c8c78c4d3f352c690c9bace38f31363705ffd763d3279]
	I0831 23:05:31.465796  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:05:31.469408  344166 logs.go:123] Gathering logs for kube-proxy [26a7774a7332b4c5eef4d467ffcd3355cf6f9032b16b3988b5094e6c990b252e] ...
	I0831 23:05:31.469438  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26a7774a7332b4c5eef4d467ffcd3355cf6f9032b16b3988b5094e6c990b252e"
	I0831 23:05:31.510760  344166 logs.go:123] Gathering logs for kube-apiserver [5c39569c9db07aa1bf9c812f831f8792513253bf9323fbe8fabceccbeeea0ca0] ...
	I0831 23:05:31.510800  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c39569c9db07aa1bf9c812f831f8792513253bf9323fbe8fabceccbeeea0ca0"
	I0831 23:05:31.548094  344166 logs.go:123] Gathering logs for etcd [53f03972f47aef41910e6abff8733eec4092f53dab600af0c8b5d18977df3b0c] ...
	I0831 23:05:31.548122  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53f03972f47aef41910e6abff8733eec4092f53dab600af0c8b5d18977df3b0c"
	I0831 23:05:31.611438  344166 logs.go:123] Gathering logs for kube-scheduler [53f68c3c790f0330e219cb5eea94d791afc0054fc373a53a4fbf75649bb69b7c] ...
	I0831 23:05:31.611472  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53f68c3c790f0330e219cb5eea94d791afc0054fc373a53a4fbf75649bb69b7c"
	I0831 23:05:31.649406  344166 logs.go:123] Gathering logs for container status ...
	I0831 23:05:31.649489  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0831 23:05:31.700303  344166 logs.go:123] Gathering logs for etcd [c9efeb49a5394d1a1adb4af3472e92d459baaa31cc434f0a1285baa144444180] ...
	I0831 23:05:31.700336  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9efeb49a5394d1a1adb4af3472e92d459baaa31cc434f0a1285baa144444180"
	I0831 23:05:31.759067  344166 logs.go:123] Gathering logs for kube-scheduler [e52aa8b5f7943decde6ee748970e11212f0acb96d0b307504192eada43b4344a] ...
	I0831 23:05:31.759100  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e52aa8b5f7943decde6ee748970e11212f0acb96d0b307504192eada43b4344a"
	I0831 23:05:31.829082  344166 logs.go:123] Gathering logs for describe nodes ...
	I0831 23:05:31.829118  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0831 23:05:32.124324  344166 logs.go:123] Gathering logs for kube-apiserver [92bd4028dde81f34e2ad3f639e894ff3b117a5fffe297efcb58560121e9a68f3] ...
	I0831 23:05:32.124361  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92bd4028dde81f34e2ad3f639e894ff3b117a5fffe297efcb58560121e9a68f3"
	I0831 23:05:32.188564  344166 logs.go:123] Gathering logs for kube-controller-manager [60ae33a5780a9506abafa648252f519b94a301807bd5e6cb2e680cbef8f221e3] ...
	I0831 23:05:32.188596  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60ae33a5780a9506abafa648252f519b94a301807bd5e6cb2e680cbef8f221e3"
	I0831 23:05:32.252696  344166 logs.go:123] Gathering logs for kube-controller-manager [8bdc038e6aaf309754e31bc4bacb8d92bff96de26d07f48b63951679d0326ed9] ...
	I0831 23:05:32.252740  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bdc038e6aaf309754e31bc4bacb8d92bff96de26d07f48b63951679d0326ed9"
	I0831 23:05:32.290762  344166 logs.go:123] Gathering logs for kubelet ...
	I0831 23:05:32.290793  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0831 23:05:32.374226  344166 logs.go:123] Gathering logs for dmesg ...
	I0831 23:05:32.374269  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0831 23:05:32.395984  344166 logs.go:123] Gathering logs for kindnet [b9757e6d4f6c2e9aeb8c8c78c4d3f352c690c9bace38f31363705ffd763d3279] ...
	I0831 23:05:32.396023  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9757e6d4f6c2e9aeb8c8c78c4d3f352c690c9bace38f31363705ffd763d3279"
	I0831 23:05:32.439106  344166 logs.go:123] Gathering logs for CRI-O ...
	I0831 23:05:32.439137  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0831 23:05:35.022070  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0831 23:05:35.022097  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:35.022107  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:35.022111  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:35.041370  344166 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0831 23:05:35.050848  344166 system_pods.go:59] 19 kube-system pods found
	I0831 23:05:35.050961  344166 system_pods.go:61] "coredns-6f6b679f8f-d67w5" [047da125-aee8-40c2-b647-a70792abe582] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0831 23:05:35.050987  344166 system_pods.go:61] "coredns-6f6b679f8f-drznk" [d623280c-b5fb-4440-a885-d0a9a14bc995] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0831 23:05:35.051029  344166 system_pods.go:61] "etcd-ha-330867" [3f83b4a1-2cb0-4842-b1fe-851fb3bb9ae5] Running
	I0831 23:05:35.051059  344166 system_pods.go:61] "etcd-ha-330867-m02" [36969a99-6192-4aa7-a072-371da390e418] Running
	I0831 23:05:35.051081  344166 system_pods.go:61] "kindnet-bdzqv" [a399a7b4-f344-4ec3-911e-8c32d75d5067] Running
	I0831 23:05:35.051104  344166 system_pods.go:61] "kindnet-bfwhw" [f422b4a3-3c26-4ea5-8df9-f6c096fdd753] Running
	I0831 23:05:35.051136  344166 system_pods.go:61] "kindnet-fnccr" [a9d2b85c-0746-4a05-a717-6161447fc9d1] Running
	I0831 23:05:35.051164  344166 system_pods.go:61] "kube-apiserver-ha-330867" [fdbb0015-8158-49f3-a4fb-02a878e653da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0831 23:05:35.051221  344166 system_pods.go:61] "kube-apiserver-ha-330867-m02" [8efba5fd-3c97-43c0-b13d-b612c91b93c6] Running
	I0831 23:05:35.051264  344166 system_pods.go:61] "kube-controller-manager-ha-330867" [823105eb-7ed4-4533-9eea-a9ff49b05b6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0831 23:05:35.051283  344166 system_pods.go:61] "kube-controller-manager-ha-330867-m02" [98ec99db-abe2-4e64-912b-ab5ecaf97c5b] Running
	I0831 23:05:35.051306  344166 system_pods.go:61] "kube-proxy-5n584" [ca8e94ba-7c93-4acf-8447-435a472eb72b] Running
	I0831 23:05:35.051339  344166 system_pods.go:61] "kube-proxy-72g7x" [fc8dca69-4778-4bdf-b75c-8f368bcace6d] Running
	I0831 23:05:35.051365  344166 system_pods.go:61] "kube-proxy-fzpmn" [8fc8463c-241a-422f-81fe-56572131cc72] Running
	I0831 23:05:35.051384  344166 system_pods.go:61] "kube-scheduler-ha-330867" [35bdda8a-9c26-44c3-99ac-d4c2adb3dcea] Running
	I0831 23:05:35.051406  344166 system_pods.go:61] "kube-scheduler-ha-330867-m02" [02d4d764-65a0-489f-9878-87e852adcbc4] Running
	I0831 23:05:35.051440  344166 system_pods.go:61] "kube-vip-ha-330867" [411b1533-b4ba-4c36-b2b7-cf2992289028] Running
	I0831 23:05:35.051464  344166 system_pods.go:61] "kube-vip-ha-330867-m02" [04bcfe59-51be-4aed-8c9c-04701c757838] Running
	I0831 23:05:35.051483  344166 system_pods.go:61] "storage-provisioner" [d9f043e6-e0f1-4285-a2e8-0afc18eeeca5] Running
	I0831 23:05:35.051507  344166 system_pods.go:74] duration metric: took 3.890172661s to wait for pod list to return data ...
	I0831 23:05:35.051541  344166 default_sa.go:34] waiting for default service account to be created ...
	I0831 23:05:35.051768  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0831 23:05:35.051807  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:35.051839  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:35.051895  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:35.055791  344166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:05:35.056049  344166 default_sa.go:45] found service account: "default"
	I0831 23:05:35.056066  344166 default_sa.go:55] duration metric: took 4.50053ms for default service account to be created ...
	I0831 23:05:35.056076  344166 system_pods.go:116] waiting for k8s-apps to be running ...
	I0831 23:05:35.056140  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0831 23:05:35.056146  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:35.056154  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:35.056157  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:35.063983  344166 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0831 23:05:35.072478  344166 system_pods.go:86] 19 kube-system pods found
	I0831 23:05:35.072522  344166 system_pods.go:89] "coredns-6f6b679f8f-d67w5" [047da125-aee8-40c2-b647-a70792abe582] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0831 23:05:35.072560  344166 system_pods.go:89] "coredns-6f6b679f8f-drznk" [d623280c-b5fb-4440-a885-d0a9a14bc995] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0831 23:05:35.072576  344166 system_pods.go:89] "etcd-ha-330867" [3f83b4a1-2cb0-4842-b1fe-851fb3bb9ae5] Running
	I0831 23:05:35.072584  344166 system_pods.go:89] "etcd-ha-330867-m02" [36969a99-6192-4aa7-a072-371da390e418] Running
	I0831 23:05:35.072598  344166 system_pods.go:89] "kindnet-bdzqv" [a399a7b4-f344-4ec3-911e-8c32d75d5067] Running
	I0831 23:05:35.072603  344166 system_pods.go:89] "kindnet-bfwhw" [f422b4a3-3c26-4ea5-8df9-f6c096fdd753] Running
	I0831 23:05:35.072608  344166 system_pods.go:89] "kindnet-fnccr" [a9d2b85c-0746-4a05-a717-6161447fc9d1] Running
	I0831 23:05:35.072619  344166 system_pods.go:89] "kube-apiserver-ha-330867" [fdbb0015-8158-49f3-a4fb-02a878e653da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0831 23:05:35.072642  344166 system_pods.go:89] "kube-apiserver-ha-330867-m02" [8efba5fd-3c97-43c0-b13d-b612c91b93c6] Running
	I0831 23:05:35.072656  344166 system_pods.go:89] "kube-controller-manager-ha-330867" [823105eb-7ed4-4533-9eea-a9ff49b05b6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0831 23:05:35.072672  344166 system_pods.go:89] "kube-controller-manager-ha-330867-m02" [98ec99db-abe2-4e64-912b-ab5ecaf97c5b] Running
	I0831 23:05:35.072685  344166 system_pods.go:89] "kube-proxy-5n584" [ca8e94ba-7c93-4acf-8447-435a472eb72b] Running
	I0831 23:05:35.072690  344166 system_pods.go:89] "kube-proxy-72g7x" [fc8dca69-4778-4bdf-b75c-8f368bcace6d] Running
	I0831 23:05:35.072695  344166 system_pods.go:89] "kube-proxy-fzpmn" [8fc8463c-241a-422f-81fe-56572131cc72] Running
	I0831 23:05:35.072699  344166 system_pods.go:89] "kube-scheduler-ha-330867" [35bdda8a-9c26-44c3-99ac-d4c2adb3dcea] Running
	I0831 23:05:35.072708  344166 system_pods.go:89] "kube-scheduler-ha-330867-m02" [02d4d764-65a0-489f-9878-87e852adcbc4] Running
	I0831 23:05:35.072713  344166 system_pods.go:89] "kube-vip-ha-330867" [411b1533-b4ba-4c36-b2b7-cf2992289028] Running
	I0831 23:05:35.072717  344166 system_pods.go:89] "kube-vip-ha-330867-m02" [04bcfe59-51be-4aed-8c9c-04701c757838] Running
	I0831 23:05:35.072724  344166 system_pods.go:89] "storage-provisioner" [d9f043e6-e0f1-4285-a2e8-0afc18eeeca5] Running
	I0831 23:05:35.072733  344166 system_pods.go:126] duration metric: took 16.65125ms to wait for k8s-apps to be running ...
	I0831 23:05:35.072766  344166 system_svc.go:44] waiting for kubelet service to be running ....
	I0831 23:05:35.072848  344166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 23:05:35.092005  344166 system_svc.go:56] duration metric: took 19.228795ms WaitForService to wait for kubelet
	I0831 23:05:35.092033  344166 kubeadm.go:582] duration metric: took 1m12.776460824s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 23:05:35.092056  344166 node_conditions.go:102] verifying NodePressure condition ...
	I0831 23:05:35.092137  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0831 23:05:35.092144  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:35.092152  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:35.092158  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:35.096186  344166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:05:35.098434  344166 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0831 23:05:35.098480  344166 node_conditions.go:123] node cpu capacity is 2
	I0831 23:05:35.098493  344166 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0831 23:05:35.098499  344166 node_conditions.go:123] node cpu capacity is 2
	I0831 23:05:35.098503  344166 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0831 23:05:35.098508  344166 node_conditions.go:123] node cpu capacity is 2
	I0831 23:05:35.098512  344166 node_conditions.go:105] duration metric: took 6.450927ms to run NodePressure ...
	I0831 23:05:35.098525  344166 start.go:241] waiting for startup goroutines ...
	I0831 23:05:35.098549  344166 start.go:255] writing updated cluster config ...
	I0831 23:05:35.101930  344166 out.go:201] 
	I0831 23:05:35.105926  344166 config.go:182] Loaded profile config "ha-330867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:05:35.106093  344166 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/config.json ...
	I0831 23:05:35.109264  344166 out.go:177] * Starting "ha-330867-m04" worker node in "ha-330867" cluster
	I0831 23:05:35.112610  344166 cache.go:121] Beginning downloading kic base image for docker with crio
	I0831 23:05:35.115465  344166 out.go:177] * Pulling base image v0.0.44-1724862063-19530 ...
	I0831 23:05:35.117936  344166 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 23:05:35.117996  344166 cache.go:56] Caching tarball of preloaded images
	I0831 23:05:35.118050  344166 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local docker daemon
	I0831 23:05:35.118140  344166 preload.go:172] Found /home/jenkins/minikube-integration/18943-277799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0831 23:05:35.118182  344166 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0831 23:05:35.118327  344166 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/config.json ...
	W0831 23:05:35.148979  344166 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 is of wrong architecture
	I0831 23:05:35.148998  344166 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 to local cache
	I0831 23:05:35.149094  344166 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory
	I0831 23:05:35.149114  344166 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory, skipping pull
	I0831 23:05:35.149118  344166 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 exists in cache, skipping pull
	I0831 23:05:35.149126  344166 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 as a tarball
	I0831 23:05:35.149132  344166 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from local cache
	I0831 23:05:35.150704  344166 image.go:273] response: 
	I0831 23:05:35.281188  344166 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from cached tarball
	I0831 23:05:35.281229  344166 cache.go:194] Successfully downloaded all kic artifacts
	I0831 23:05:35.281262  344166 start.go:360] acquireMachinesLock for ha-330867-m04: {Name:mk08f642f0ee1abb65ae3ac6825e6c93f3c32dce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 23:05:35.281329  344166 start.go:364] duration metric: took 43.241µs to acquireMachinesLock for "ha-330867-m04"
	I0831 23:05:35.281353  344166 start.go:96] Skipping create...Using existing machine configuration
	I0831 23:05:35.281359  344166 fix.go:54] fixHost starting: m04
	I0831 23:05:35.281633  344166 cli_runner.go:164] Run: docker container inspect ha-330867-m04 --format={{.State.Status}}
	I0831 23:05:35.298415  344166 fix.go:112] recreateIfNeeded on ha-330867-m04: state=Stopped err=<nil>
	W0831 23:05:35.298440  344166 fix.go:138] unexpected machine state, will restart: <nil>
	I0831 23:05:35.301587  344166 out.go:177] * Restarting existing docker container for "ha-330867-m04" ...
	I0831 23:05:35.304293  344166 cli_runner.go:164] Run: docker start ha-330867-m04
	I0831 23:05:35.594776  344166 cli_runner.go:164] Run: docker container inspect ha-330867-m04 --format={{.State.Status}}
	I0831 23:05:35.619898  344166 kic.go:435] container "ha-330867-m04" state is running.
	I0831 23:05:35.620429  344166 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-330867")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-330867-m04
	I0831 23:05:35.648855  344166 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/config.json ...
	I0831 23:05:35.649102  344166 machine.go:93] provisionDockerMachine start ...
	I0831 23:05:35.649161  344166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m04
	I0831 23:05:35.673864  344166 main.go:141] libmachine: Using SSH client type: native
	I0831 23:05:35.674118  344166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I0831 23:05:35.674127  344166 main.go:141] libmachine: About to run SSH command:
	hostname
	I0831 23:05:35.676716  344166 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0831 23:05:38.816862  344166 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-330867-m04
	
	I0831 23:05:38.816942  344166 ubuntu.go:169] provisioning hostname "ha-330867-m04"
	I0831 23:05:38.817029  344166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m04
	I0831 23:05:38.844707  344166 main.go:141] libmachine: Using SSH client type: native
	I0831 23:05:38.844954  344166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I0831 23:05:38.844965  344166 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-330867-m04 && echo "ha-330867-m04" | sudo tee /etc/hostname
	I0831 23:05:38.997403  344166 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-330867-m04
	
	I0831 23:05:38.997517  344166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m04
	I0831 23:05:39.028255  344166 main.go:141] libmachine: Using SSH client type: native
	I0831 23:05:39.028523  344166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I0831 23:05:39.028541  344166 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-330867-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-330867-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-330867-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 23:05:39.184562  344166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 23:05:39.184654  344166 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-277799/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-277799/.minikube}
	I0831 23:05:39.184685  344166 ubuntu.go:177] setting up certificates
	I0831 23:05:39.184727  344166 provision.go:84] configureAuth start
	I0831 23:05:39.184820  344166 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-330867")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-330867-m04
	I0831 23:05:39.226167  344166 provision.go:143] copyHostCerts
	I0831 23:05:39.226209  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-277799/.minikube/key.pem
	I0831 23:05:39.226244  344166 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-277799/.minikube/key.pem, removing ...
	I0831 23:05:39.226251  344166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-277799/.minikube/key.pem
	I0831 23:05:39.226328  344166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-277799/.minikube/key.pem (1675 bytes)
	I0831 23:05:39.226409  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-277799/.minikube/ca.pem
	I0831 23:05:39.226425  344166 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-277799/.minikube/ca.pem, removing ...
	I0831 23:05:39.226429  344166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.pem
	I0831 23:05:39.226456  344166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-277799/.minikube/ca.pem (1082 bytes)
	I0831 23:05:39.226510  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-277799/.minikube/cert.pem
	I0831 23:05:39.226527  344166 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-277799/.minikube/cert.pem, removing ...
	I0831 23:05:39.226532  344166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-277799/.minikube/cert.pem
	I0831 23:05:39.226555  344166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-277799/.minikube/cert.pem (1123 bytes)
	I0831 23:05:39.226601  344166 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca-key.pem org=jenkins.ha-330867-m04 san=[127.0.0.1 192.168.49.5 ha-330867-m04 localhost minikube]
	I0831 23:05:39.959705  344166 provision.go:177] copyRemoteCerts
	I0831 23:05:39.959787  344166 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 23:05:39.959849  344166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m04
	I0831 23:05:39.978566  344166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867-m04/id_rsa Username:docker}
	I0831 23:05:40.104817  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0831 23:05:40.104915  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0831 23:05:40.146570  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0831 23:05:40.146631  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0831 23:05:40.193677  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0831 23:05:40.193735  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0831 23:05:40.242237  344166 provision.go:87] duration metric: took 1.057483199s to configureAuth
	I0831 23:05:40.242283  344166 ubuntu.go:193] setting minikube options for container-runtime
	I0831 23:05:40.242530  344166 config.go:182] Loaded profile config "ha-330867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:05:40.242663  344166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m04
	I0831 23:05:40.269471  344166 main.go:141] libmachine: Using SSH client type: native
	I0831 23:05:40.269724  344166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I0831 23:05:40.269744  344166 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0831 23:05:40.558863  344166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0831 23:05:40.558891  344166 machine.go:96] duration metric: took 4.909779527s to provisionDockerMachine
	I0831 23:05:40.558903  344166 start.go:293] postStartSetup for "ha-330867-m04" (driver="docker")
	I0831 23:05:40.558914  344166 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 23:05:40.558978  344166 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 23:05:40.559032  344166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m04
	I0831 23:05:40.582975  344166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867-m04/id_rsa Username:docker}
	I0831 23:05:40.683061  344166 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 23:05:40.686689  344166 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0831 23:05:40.686727  344166 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0831 23:05:40.686737  344166 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0831 23:05:40.686744  344166 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0831 23:05:40.686755  344166 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-277799/.minikube/addons for local assets ...
	I0831 23:05:40.686819  344166 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-277799/.minikube/files for local assets ...
	I0831 23:05:40.686900  344166 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem -> 2831972.pem in /etc/ssl/certs
	I0831 23:05:40.686911  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem -> /etc/ssl/certs/2831972.pem
	I0831 23:05:40.687020  344166 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0831 23:05:40.697108  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem --> /etc/ssl/certs/2831972.pem (1708 bytes)
	I0831 23:05:40.730260  344166 start.go:296] duration metric: took 171.341433ms for postStartSetup
	I0831 23:05:40.730367  344166 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 23:05:40.730433  344166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m04
	I0831 23:05:40.750777  344166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867-m04/id_rsa Username:docker}
	I0831 23:05:40.847819  344166 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0831 23:05:40.852528  344166 fix.go:56] duration metric: took 5.571162019s for fixHost
	I0831 23:05:40.852564  344166 start.go:83] releasing machines lock for "ha-330867-m04", held for 5.571222311s
	I0831 23:05:40.852637  344166 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-330867")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-330867-m04
	I0831 23:05:40.873545  344166 out.go:177] * Found network options:
	I0831 23:05:40.875170  344166 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0831 23:05:40.876833  344166 proxy.go:119] fail to check proxy env: Error ip not in block
	W0831 23:05:40.876868  344166 proxy.go:119] fail to check proxy env: Error ip not in block
	W0831 23:05:40.876896  344166 proxy.go:119] fail to check proxy env: Error ip not in block
	W0831 23:05:40.876910  344166 proxy.go:119] fail to check proxy env: Error ip not in block
	I0831 23:05:40.876986  344166 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0831 23:05:40.877033  344166 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 23:05:40.877063  344166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m04
	I0831 23:05:40.877092  344166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m04
	I0831 23:05:40.897746  344166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867-m04/id_rsa Username:docker}
	I0831 23:05:40.900715  344166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867-m04/id_rsa Username:docker}
	I0831 23:05:41.189129  344166 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0831 23:05:41.199064  344166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 23:05:41.213223  344166 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0831 23:05:41.213307  344166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 23:05:41.224214  344166 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
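The two find commands above sideline any pre-existing loopback/bridge/podman CNI configs by renaming them with a .mk_disabled suffix so only minikube's own CNI configuration stays active. A minimal Go sketch of the same idea follows; disableCNIConfigs is an illustrative helper, not part of minikube's cni.go.

package cnisketch

import (
	"os"
	"path/filepath"
	"strings"
)

// disableCNIConfigs renames matching config files in dir to <name>.mk_disabled,
// mirroring the find/mv commands shown in the log above.
func disableCNIConfigs(dir string, patterns []string) error {
	for _, pat := range patterns {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already sidelined on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return err
			}
		}
	}
	return nil
}

For example, disableCNIConfigs("/etc/cni/net.d", []string{"*loopback.conf*", "*bridge*", "*podman*"}) would cover both of the find invocations above.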
	I0831 23:05:41.224239  344166 start.go:495] detecting cgroup driver to use...
	I0831 23:05:41.224274  344166 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0831 23:05:41.224333  344166 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0831 23:05:41.238797  344166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0831 23:05:41.251201  344166 docker.go:217] disabling cri-docker service (if available) ...
	I0831 23:05:41.251323  344166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0831 23:05:41.276989  344166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0831 23:05:41.293582  344166 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0831 23:05:41.432913  344166 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0831 23:05:41.615050  344166 docker.go:233] disabling docker service ...
	I0831 23:05:41.615114  344166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0831 23:05:41.633307  344166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0831 23:05:41.650073  344166 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0831 23:05:41.764042  344166 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0831 23:05:41.876058  344166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0831 23:05:41.892286  344166 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 23:05:41.917309  344166 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0831 23:05:41.917433  344166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:05:41.931390  344166 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0831 23:05:41.931505  344166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:05:41.943311  344166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:05:41.957147  344166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:05:41.968798  344166 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 23:05:41.980081  344166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:05:42.001018  344166 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:05:42.060792  344166 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0831 23:05:42.073366  344166 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 23:05:42.085124  344166 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 23:05:42.096227  344166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 23:05:42.227812  344166 ssh_runner.go:195] Run: sudo systemctl restart crio
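Every Run: line in this log is executed on the node over SSH by ssh_runner.go, using the per-machine id_rsa key and the forwarded port from the sshutil lines (127.0.0.1:33203 here). A rough sketch of that pattern with golang.org/x/crypto/ssh is shown below; runOverSSH is a made-up helper, not minikube's actual implementation.

package sshsketch

import (
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH runs a single command on addr (e.g. "127.0.0.1:33203") as user,
// authenticating with the private key at keyPath, and returns combined output.
func runOverSSH(addr, user, keyPath, cmd string) ([]byte, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node, not for production
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return nil, err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return nil, err
	}
	defer session.Close()
	return session.CombinedOutput(cmd)
}

Something like runOverSSH("127.0.0.1:33203", "docker", "/path/to/id_rsa", "sudo systemctl restart crio") corresponds to the restart command just above.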
	I0831 23:05:42.380986  344166 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0831 23:05:42.381139  344166 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0831 23:05:42.386515  344166 start.go:563] Will wait 60s for crictl version
	I0831 23:05:42.386635  344166 ssh_runner.go:195] Run: which crictl
	I0831 23:05:42.391780  344166 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 23:05:42.437227  344166 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0831 23:05:42.437378  344166 ssh_runner.go:195] Run: crio --version
	I0831 23:05:42.487297  344166 ssh_runner.go:195] Run: crio --version
	I0831 23:05:42.536290  344166 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0831 23:05:42.537553  344166 out.go:177]   - env NO_PROXY=192.168.49.2
	I0831 23:05:42.538993  344166 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0831 23:05:42.540195  344166 cli_runner.go:164] Run: docker network inspect ha-330867 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0831 23:05:42.554956  344166 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0831 23:05:42.559714  344166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 23:05:42.571470  344166 mustload.go:65] Loading cluster: ha-330867
	I0831 23:05:42.571715  344166 config.go:182] Loaded profile config "ha-330867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:05:42.571985  344166 cli_runner.go:164] Run: docker container inspect ha-330867 --format={{.State.Status}}
	I0831 23:05:42.591467  344166 host.go:66] Checking if "ha-330867" exists ...
	I0831 23:05:42.591751  344166 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867 for IP: 192.168.49.5
	I0831 23:05:42.591770  344166 certs.go:194] generating shared ca certs ...
	I0831 23:05:42.591785  344166 certs.go:226] acquiring lock for ca certs: {Name:mk25c48345241d49df22687ae20353d5a7b46e8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 23:05:42.591899  344166 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key
	I0831 23:05:42.591944  344166 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key
	I0831 23:05:42.591959  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0831 23:05:42.591977  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0831 23:05:42.591994  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0831 23:05:42.592008  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0831 23:05:42.592065  344166 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/283197.pem (1338 bytes)
	W0831 23:05:42.592098  344166 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-277799/.minikube/certs/283197_empty.pem, impossibly tiny 0 bytes
	I0831 23:05:42.592111  344166 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca-key.pem (1679 bytes)
	I0831 23:05:42.592135  344166 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/ca.pem (1082 bytes)
	I0831 23:05:42.592176  344166 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/cert.pem (1123 bytes)
	I0831 23:05:42.592204  344166 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/key.pem (1675 bytes)
	I0831 23:05:42.592249  344166 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem (1708 bytes)
	I0831 23:05:42.592281  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/certs/283197.pem -> /usr/share/ca-certificates/283197.pem
	I0831 23:05:42.592299  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem -> /usr/share/ca-certificates/2831972.pem
	I0831 23:05:42.592314  344166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:05:42.592334  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 23:05:42.620378  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0831 23:05:42.647186  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 23:05:42.674142  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0831 23:05:42.709237  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/certs/283197.pem --> /usr/share/ca-certificates/283197.pem (1338 bytes)
	I0831 23:05:42.737544  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/ssl/certs/2831972.pem --> /usr/share/ca-certificates/2831972.pem (1708 bytes)
	I0831 23:05:42.764127  344166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 23:05:42.791462  344166 ssh_runner.go:195] Run: openssl version
	I0831 23:05:42.797209  344166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2831972.pem && ln -fs /usr/share/ca-certificates/2831972.pem /etc/ssl/certs/2831972.pem"
	I0831 23:05:42.807143  344166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2831972.pem
	I0831 23:05:42.812096  344166 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 31 22:51 /usr/share/ca-certificates/2831972.pem
	I0831 23:05:42.812166  344166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2831972.pem
	I0831 23:05:42.823230  344166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2831972.pem /etc/ssl/certs/3ec20f2e.0"
	I0831 23:05:42.832702  344166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 23:05:42.842341  344166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:05:42.846244  344166 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:33 /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:05:42.846311  344166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 23:05:42.853854  344166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0831 23:05:42.863392  344166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/283197.pem && ln -fs /usr/share/ca-certificates/283197.pem /etc/ssl/certs/283197.pem"
	I0831 23:05:42.874109  344166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/283197.pem
	I0831 23:05:42.878627  344166 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 31 22:51 /usr/share/ca-certificates/283197.pem
	I0831 23:05:42.878732  344166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/283197.pem
	I0831 23:05:42.885874  344166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/283197.pem /etc/ssl/certs/51391683.0"
	I0831 23:05:42.894954  344166 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 23:05:42.898936  344166 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0831 23:05:42.898994  344166 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.31.0  false true} ...
	I0831 23:05:42.899092  344166 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-330867-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-330867 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
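The kubeadm.go:946 entry above shows the kubelet systemd drop-in minikube assembles for this worker, with the node name and --node-ip taken from the node config on the following line. A small sketch of assembling such a drop-in from those two values; renderKubeletUnit is illustrative only.

package kubeletsketch

import "fmt"

// renderKubeletUnit builds a kubelet systemd drop-in like the one shown in the
// log above, substituting the Kubernetes version, node name, and node IP.
func renderKubeletUnit(nodeName, nodeIP, k8sVersion string) string {
	return fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

[Install]
`, k8sVersion, nodeName, nodeIP)
}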
	I0831 23:05:42.899167  344166 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0831 23:05:42.908332  344166 binaries.go:44] Found k8s binaries, skipping transfer
	I0831 23:05:42.908464  344166 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0831 23:05:42.918277  344166 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0831 23:05:42.937457  344166 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 23:05:42.957815  344166 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0831 23:05:42.963336  344166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 23:05:42.976368  344166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 23:05:43.096644  344166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 23:05:43.109390  344166 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0831 23:05:43.109836  344166 config.go:182] Loaded profile config "ha-330867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:05:43.111104  344166 out.go:177] * Verifying Kubernetes components...
	I0831 23:05:43.112357  344166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 23:05:43.218258  344166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 23:05:43.232256  344166 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-277799/kubeconfig
	I0831 23:05:43.232637  344166 kapi.go:59] client config for ha-330867: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-277799/.minikube/profiles/ha-330867/client.key", CAFile:"/home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19cbad0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0831 23:05:43.232721  344166 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0831 23:05:43.232959  344166 node_ready.go:35] waiting up to 6m0s for node "ha-330867-m04" to be "Ready" ...
	I0831 23:05:43.233051  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:05:43.233094  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:43.233121  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:43.233143  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:43.235958  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:43.733274  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:05:43.733298  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:43.733308  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:43.733312  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:43.736153  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:44.234079  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:05:44.234102  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:44.234113  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:44.234117  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:44.236979  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:44.733653  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:05:44.733734  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:44.733749  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:44.733756  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:44.736590  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:45.234078  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:05:45.234104  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:45.234114  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:45.234121  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:45.241296  344166 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0831 23:05:45.245481  344166 node_ready.go:53] node "ha-330867-m04" has status "Ready":"Unknown"
	I0831 23:05:45.733603  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:05:45.733629  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:45.733639  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:45.733643  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:45.736459  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:46.233309  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:05:46.233335  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:46.233345  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:46.233349  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:46.236113  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:46.733654  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:05:46.733679  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:46.733687  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:46.733691  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:46.736859  344166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:05:47.233223  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:05:47.233249  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:47.233259  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:47.233262  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:47.236072  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:47.733632  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:05:47.733657  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:47.733667  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:47.733672  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:47.736622  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:47.737142  344166 node_ready.go:53] node "ha-330867-m04" has status "Ready":"Unknown"
	I0831 23:05:48.233254  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:05:48.233382  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:48.233400  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:48.233406  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:48.236214  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:48.733208  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:05:48.733237  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:48.733248  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:48.733253  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:48.736093  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:49.233193  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:05:49.233221  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:49.233230  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:49.233236  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:49.236160  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:49.733608  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:05:49.733628  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:49.733638  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:49.733643  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:49.736828  344166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:05:49.737965  344166 node_ready.go:49] node "ha-330867-m04" has status "Ready":"True"
	I0831 23:05:49.737987  344166 node_ready.go:38] duration metric: took 6.504995753s for node "ha-330867-m04" to be "Ready" ...
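The round_trippers loop above is simply polling GET /api/v1/nodes/ha-330867-m04 until the node reports a Ready condition of True. A condensed sketch of the same wait with client-go is shown below; waitNodeReady and the hard-coded poll interval are illustrative, not node_ready.go itself.

package readysketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node's Ready condition every 500ms until it is True
// or the timeout expires, roughly what the GET loop in the log is doing.
func waitNodeReady(kubeconfig, apiServer, node string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cfg.Host = apiServer // e.g. override a stale HA VIP with a reachable control-plane endpoint
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		n, err := client.CoreV1().Nodes().Get(context.TODO(), node, metav1.GetOptions{})
		if err == nil {
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready within %s", node, timeout)
}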
	I0831 23:05:49.737997  344166 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 23:05:49.738063  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0831 23:05:49.738074  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:49.738083  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:49.738087  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:49.742981  344166 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0831 23:05:49.751691  344166 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-d67w5" in "kube-system" namespace to be "Ready" ...
	I0831 23:05:49.751840  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d67w5
	I0831 23:05:49.751857  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:49.751866  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:49.751870  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:49.754746  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:49.755773  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:05:49.755794  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:49.755803  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:49.755809  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:49.758579  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:50.252398  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d67w5
	I0831 23:05:50.252440  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:50.252449  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:50.252455  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:50.255327  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:50.256042  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:05:50.256063  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:50.256072  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:50.256076  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:50.258533  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:50.751926  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d67w5
	I0831 23:05:50.751949  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:50.751962  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:50.751966  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:50.754987  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:50.755843  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:05:50.755858  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:50.755866  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:50.755873  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:50.759364  344166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:05:51.252080  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d67w5
	I0831 23:05:51.252106  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:51.252116  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:51.252120  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:51.255013  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:51.255798  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:05:51.255845  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:51.255868  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:51.255890  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:51.258228  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:51.752009  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d67w5
	I0831 23:05:51.752034  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:51.752044  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:51.752049  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:51.754886  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:51.755730  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:05:51.755753  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:51.755763  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:51.755766  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:51.758346  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:51.758946  344166 pod_ready.go:103] pod "coredns-6f6b679f8f-d67w5" in "kube-system" namespace has status "Ready":"False"
	I0831 23:05:52.252536  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d67w5
	I0831 23:05:52.252559  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:52.252569  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:52.252572  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:52.255356  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:52.256164  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:05:52.256185  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:52.256194  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:52.256199  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:52.258728  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:52.752651  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d67w5
	I0831 23:05:52.752683  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:52.752696  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:52.752702  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:52.755638  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:52.756564  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:05:52.756585  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:52.756594  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:52.756598  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:52.759305  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:53.252363  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d67w5
	I0831 23:05:53.252387  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:53.252397  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:53.252401  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:53.255190  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:53.256022  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:05:53.256045  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:53.256055  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:53.256062  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:53.258696  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:53.751998  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d67w5
	I0831 23:05:53.752026  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:53.752036  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:53.752040  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:53.755016  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:53.755951  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:05:53.755969  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:53.755978  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:53.755982  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:53.758598  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:53.759204  344166 pod_ready.go:103] pod "coredns-6f6b679f8f-d67w5" in "kube-system" namespace has status "Ready":"False"
	I0831 23:05:54.251882  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d67w5
	I0831 23:05:54.251909  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:54.251918  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:54.251924  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:54.254681  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:54.255361  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:05:54.255378  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:54.255388  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:54.255392  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:54.257735  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:54.752935  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d67w5
	I0831 23:05:54.752958  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:54.752969  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:54.752973  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:54.755811  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:54.756758  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:05:54.756781  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:54.756790  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:54.756794  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:54.759348  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:55.251956  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d67w5
	I0831 23:05:55.251979  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:55.251989  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:55.251993  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:55.254889  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:55.255683  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:05:55.255705  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:55.255715  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:55.255719  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:55.258412  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:55.752803  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d67w5
	I0831 23:05:55.752826  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:55.752836  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:55.752839  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:55.755760  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:55.756745  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:05:55.756766  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:55.756776  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:55.756794  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:55.759387  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:55.760122  344166 pod_ready.go:103] pod "coredns-6f6b679f8f-d67w5" in "kube-system" namespace has status "Ready":"False"
	I0831 23:05:56.251990  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d67w5
	I0831 23:05:56.252015  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:56.252025  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:56.252028  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:56.255743  344166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:05:56.256868  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:05:56.256893  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:56.256902  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:56.256907  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:56.260609  344166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:05:56.752059  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d67w5
	I0831 23:05:56.752088  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:56.752098  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:56.752103  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:56.755202  344166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:05:56.756006  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:05:56.756030  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:56.756039  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:56.756045  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:56.758896  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:57.252727  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d67w5
	I0831 23:05:57.252749  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:57.252758  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:57.252762  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:57.255514  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:57.256535  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:05:57.256552  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:57.256561  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:57.256566  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:57.260399  344166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:05:57.752754  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d67w5
	I0831 23:05:57.752775  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:57.752784  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:57.752789  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:57.758900  344166 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0831 23:05:57.760093  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:05:57.760110  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:57.760119  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:57.760124  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:57.771667  344166 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0831 23:05:57.772288  344166 pod_ready.go:103] pod "coredns-6f6b679f8f-d67w5" in "kube-system" namespace has status "Ready":"False"
	I0831 23:05:58.252080  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d67w5
	I0831 23:05:58.252120  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:58.252135  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:58.252141  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:58.255370  344166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:05:58.256271  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:05:58.256292  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:58.256302  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:58.256307  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:58.259007  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:58.751952  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d67w5
	I0831 23:05:58.751975  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:58.751986  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:58.751992  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:58.769332  344166 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0831 23:05:58.777312  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:05:58.777336  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:58.777344  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:58.777350  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:58.799029  344166 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0831 23:05:58.806394  344166 pod_ready.go:98] node "ha-330867" hosting pod "coredns-6f6b679f8f-d67w5" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:05:58.806439  344166 pod_ready.go:82] duration metric: took 9.054696454s for pod "coredns-6f6b679f8f-d67w5" in "kube-system" namespace to be "Ready" ...
	E0831 23:05:58.806454  344166 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-330867" hosting pod "coredns-6f6b679f8f-d67w5" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:05:58.806467  344166 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-drznk" in "kube-system" namespace to be "Ready" ...
	I0831 23:05:58.806565  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-drznk
	I0831 23:05:58.806583  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:58.806600  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:58.806611  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:58.820914  344166 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0831 23:05:58.821636  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:05:58.821655  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:58.821673  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:58.821677  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:58.831188  344166 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0831 23:05:58.831973  344166 pod_ready.go:98] node "ha-330867" hosting pod "coredns-6f6b679f8f-drznk" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:05:58.831999  344166 pod_ready.go:82] duration metric: took 25.521201ms for pod "coredns-6f6b679f8f-drznk" in "kube-system" namespace to be "Ready" ...
	E0831 23:05:58.832015  344166 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-330867" hosting pod "coredns-6f6b679f8f-drznk" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:05:58.832034  344166 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-330867" in "kube-system" namespace to be "Ready" ...
	I0831 23:05:58.832133  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867
	I0831 23:05:58.832144  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:58.832152  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:58.832157  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:58.839826  344166 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0831 23:05:58.840686  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:05:58.840707  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:58.840722  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:58.840730  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:58.849647  344166 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0831 23:05:58.850465  344166 pod_ready.go:98] node "ha-330867" hosting pod "etcd-ha-330867" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:05:58.850499  344166 pod_ready.go:82] duration metric: took 18.457773ms for pod "etcd-ha-330867" in "kube-system" namespace to be "Ready" ...
	E0831 23:05:58.850525  344166 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-330867" hosting pod "etcd-ha-330867" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:05:58.850538  344166 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:05:58.850620  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-330867-m02
	I0831 23:05:58.850632  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:58.850640  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:58.850644  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:58.864660  344166 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0831 23:05:58.865401  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:05:58.865428  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:58.865437  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:58.865442  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:58.871660  344166 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0831 23:05:58.872481  344166 pod_ready.go:93] pod "etcd-ha-330867-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 23:05:58.872502  344166 pod_ready.go:82] duration metric: took 21.956125ms for pod "etcd-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:05:58.872532  344166 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-330867" in "kube-system" namespace to be "Ready" ...
	I0831 23:05:58.872632  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-330867
	I0831 23:05:58.872647  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:58.872656  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:58.872665  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:58.880192  344166 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0831 23:05:58.881339  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:05:58.881371  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:58.881385  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:58.881393  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:58.899774  344166 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0831 23:05:58.900582  344166 pod_ready.go:98] node "ha-330867" hosting pod "kube-apiserver-ha-330867" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:05:58.900618  344166 pod_ready.go:82] duration metric: took 28.077356ms for pod "kube-apiserver-ha-330867" in "kube-system" namespace to be "Ready" ...
	E0831 23:05:58.900660  344166 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-330867" hosting pod "kube-apiserver-ha-330867" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:05:58.900672  344166 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:05:58.952966  344166 request.go:632] Waited for 52.192238ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-330867-m02
	I0831 23:05:58.953040  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-330867-m02
	I0831 23:05:58.953053  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:58.953061  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:58.953121  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:58.964979  344166 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0831 23:05:59.151978  344166 request.go:632] Waited for 186.097795ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:05:59.152086  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:05:59.152100  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:59.152110  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:59.152125  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:59.154996  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:59.155726  344166 pod_ready.go:93] pod "kube-apiserver-ha-330867-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 23:05:59.155747  344166 pod_ready.go:82] duration metric: took 255.066559ms for pod "kube-apiserver-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:05:59.155759  344166 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-330867" in "kube-system" namespace to be "Ready" ...
	I0831 23:05:59.352039  344166 request.go:632] Waited for 196.214286ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-330867
	I0831 23:05:59.352114  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-330867
	I0831 23:05:59.352129  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:59.352137  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:59.352143  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:59.355677  344166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:05:59.552152  344166 request.go:632] Waited for 195.258599ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:05:59.552235  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:05:59.552248  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:59.552261  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:59.552267  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:59.555627  344166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:05:59.556335  344166 pod_ready.go:98] node "ha-330867" hosting pod "kube-controller-manager-ha-330867" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:05:59.556362  344166 pod_ready.go:82] duration metric: took 400.593814ms for pod "kube-controller-manager-ha-330867" in "kube-system" namespace to be "Ready" ...
	E0831 23:05:59.556378  344166 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-330867" hosting pod "kube-controller-manager-ha-330867" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:05:59.556388  344166 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:05:59.752073  344166 request.go:632] Waited for 195.568489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-330867-m02
	I0831 23:05:59.752143  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-330867-m02
	I0831 23:05:59.752154  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:59.752163  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:59.752171  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:59.755202  344166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:05:59.952460  344166 request.go:632] Waited for 196.382376ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:05:59.952520  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:05:59.952528  344166 round_trippers.go:469] Request Headers:
	I0831 23:05:59.952539  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:05:59.952547  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:05:59.955498  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:05:59.956146  344166 pod_ready.go:93] pod "kube-controller-manager-ha-330867-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 23:05:59.956167  344166 pod_ready.go:82] duration metric: took 399.771804ms for pod "kube-controller-manager-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:05:59.956180  344166 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5n584" in "kube-system" namespace to be "Ready" ...
	I0831 23:06:00.156523  344166 request.go:632] Waited for 200.270889ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:06:00.156590  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5n584
	I0831 23:06:00.156597  344166 round_trippers.go:469] Request Headers:
	I0831 23:06:00.156607  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:06:00.156612  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:06:00.160786  344166 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0831 23:06:00.352489  344166 request.go:632] Waited for 190.196809ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:06:00.352568  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m04
	I0831 23:06:00.352576  344166 round_trippers.go:469] Request Headers:
	I0831 23:06:00.352585  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:06:00.352599  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:06:00.364981  344166 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0831 23:06:00.366532  344166 pod_ready.go:93] pod "kube-proxy-5n584" in "kube-system" namespace has status "Ready":"True"
	I0831 23:06:00.366554  344166 pod_ready.go:82] duration metric: took 410.367285ms for pod "kube-proxy-5n584" in "kube-system" namespace to be "Ready" ...
	I0831 23:06:00.366579  344166 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-72g7x" in "kube-system" namespace to be "Ready" ...
	I0831 23:06:00.552475  344166 request.go:632] Waited for 185.819511ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-72g7x
	I0831 23:06:00.552594  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-72g7x
	I0831 23:06:00.552609  344166 round_trippers.go:469] Request Headers:
	I0831 23:06:00.552619  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:06:00.552625  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:06:00.555946  344166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:06:00.752940  344166 request.go:632] Waited for 196.036959ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:06:00.753007  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:06:00.753017  344166 round_trippers.go:469] Request Headers:
	I0831 23:06:00.753029  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:06:00.753034  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:06:00.755972  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:06:00.756613  344166 pod_ready.go:93] pod "kube-proxy-72g7x" in "kube-system" namespace has status "Ready":"True"
	I0831 23:06:00.756634  344166 pod_ready.go:82] duration metric: took 390.045103ms for pod "kube-proxy-72g7x" in "kube-system" namespace to be "Ready" ...
	I0831 23:06:00.756646  344166 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fzpmn" in "kube-system" namespace to be "Ready" ...
	I0831 23:06:00.951994  344166 request.go:632] Waited for 195.269332ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fzpmn
	I0831 23:06:00.952078  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fzpmn
	I0831 23:06:00.952093  344166 round_trippers.go:469] Request Headers:
	I0831 23:06:00.952104  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:06:00.952114  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:06:00.955149  344166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:06:01.152043  344166 request.go:632] Waited for 196.137537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:06:01.152127  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:06:01.152138  344166 round_trippers.go:469] Request Headers:
	I0831 23:06:01.152154  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:06:01.152190  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:06:01.155137  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:06:01.156135  344166 pod_ready.go:98] node "ha-330867" hosting pod "kube-proxy-fzpmn" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:06:01.156164  344166 pod_ready.go:82] duration metric: took 399.504409ms for pod "kube-proxy-fzpmn" in "kube-system" namespace to be "Ready" ...
	E0831 23:06:01.156176  344166 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-330867" hosting pod "kube-proxy-fzpmn" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:06:01.156185  344166 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-330867" in "kube-system" namespace to be "Ready" ...
	I0831 23:06:01.352580  344166 request.go:632] Waited for 196.330077ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-330867
	I0831 23:06:01.352703  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-330867
	I0831 23:06:01.352721  344166 round_trippers.go:469] Request Headers:
	I0831 23:06:01.352732  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:06:01.352738  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:06:01.356054  344166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:06:01.551976  344166 request.go:632] Waited for 195.255129ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:06:01.552097  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867
	I0831 23:06:01.552126  344166 round_trippers.go:469] Request Headers:
	I0831 23:06:01.552147  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:06:01.552160  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:06:01.554960  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:06:01.555934  344166 pod_ready.go:98] node "ha-330867" hosting pod "kube-scheduler-ha-330867" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:06:01.555960  344166 pod_ready.go:82] duration metric: took 399.767652ms for pod "kube-scheduler-ha-330867" in "kube-system" namespace to be "Ready" ...
	E0831 23:06:01.555972  344166 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-330867" hosting pod "kube-scheduler-ha-330867" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-330867" has status "Ready":"Unknown"
	I0831 23:06:01.555980  344166 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:06:01.752459  344166 request.go:632] Waited for 196.342434ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-330867-m02
	I0831 23:06:01.752535  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-330867-m02
	I0831 23:06:01.752545  344166 round_trippers.go:469] Request Headers:
	I0831 23:06:01.752558  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:06:01.752573  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:06:01.755512  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:06:01.952507  344166 request.go:632] Waited for 196.386307ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:06:01.952592  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-330867-m02
	I0831 23:06:01.952602  344166 round_trippers.go:469] Request Headers:
	I0831 23:06:01.952612  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:06:01.952623  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:06:01.955695  344166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0831 23:06:01.956327  344166 pod_ready.go:93] pod "kube-scheduler-ha-330867-m02" in "kube-system" namespace has status "Ready":"True"
	I0831 23:06:01.956349  344166 pod_ready.go:82] duration metric: took 400.358468ms for pod "kube-scheduler-ha-330867-m02" in "kube-system" namespace to be "Ready" ...
	I0831 23:06:01.956366  344166 pod_ready.go:39] duration metric: took 12.218357718s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 23:06:01.956442  344166 system_svc.go:44] waiting for kubelet service to be running ....
	I0831 23:06:01.956520  344166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 23:06:01.969700  344166 system_svc.go:56] duration metric: took 13.285229ms WaitForService to wait for kubelet
	I0831 23:06:01.969732  344166 kubeadm.go:582] duration metric: took 18.860294317s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 23:06:01.969756  344166 node_conditions.go:102] verifying NodePressure condition ...
	I0831 23:06:02.153054  344166 request.go:632] Waited for 183.112892ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0831 23:06:02.153121  344166 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0831 23:06:02.153131  344166 round_trippers.go:469] Request Headers:
	I0831 23:06:02.153140  344166 round_trippers.go:473]     Accept: application/json, */*
	I0831 23:06:02.153147  344166 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0831 23:06:02.156099  344166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0831 23:06:02.157647  344166 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0831 23:06:02.157678  344166 node_conditions.go:123] node cpu capacity is 2
	I0831 23:06:02.157690  344166 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0831 23:06:02.157696  344166 node_conditions.go:123] node cpu capacity is 2
	I0831 23:06:02.157729  344166 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0831 23:06:02.157736  344166 node_conditions.go:123] node cpu capacity is 2
	I0831 23:06:02.157746  344166 node_conditions.go:105] duration metric: took 187.984448ms to run NodePressure ...
	I0831 23:06:02.157759  344166 start.go:241] waiting for startup goroutines ...
	I0831 23:06:02.157814  344166 start.go:255] writing updated cluster config ...
	I0831 23:06:02.158163  344166 ssh_runner.go:195] Run: rm -f paused
	I0831 23:06:02.225957  344166 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0831 23:06:02.228955  344166 out.go:177] * Done! kubectl is now configured to use "ha-330867" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 31 23:05:28 ha-330867 crio[647]: time="2024-08-31 23:05:28.841565668Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8f7abfc361939a9504230c8690ac3b45ee2c0fd3182de52d6bd4a02f0895e748/merged/etc/group: no such file or directory"
	Aug 31 23:05:28 ha-330867 crio[647]: time="2024-08-31 23:05:28.921818275Z" level=info msg="Created container 784e0c055361ad41595fc201f9c76f1d095f56f1f384327fd32f5513d90dc787: kube-system/storage-provisioner/storage-provisioner" id=011140e1-b2d5-41ec-8984-520011bc97bc name=/runtime.v1.RuntimeService/CreateContainer
	Aug 31 23:05:28 ha-330867 crio[647]: time="2024-08-31 23:05:28.922480023Z" level=info msg="Starting container: 784e0c055361ad41595fc201f9c76f1d095f56f1f384327fd32f5513d90dc787" id=aa84ba36-40cf-4fdf-b1c0-0fb0fe996e16 name=/runtime.v1.RuntimeService/StartContainer
	Aug 31 23:05:28 ha-330867 crio[647]: time="2024-08-31 23:05:28.939676518Z" level=info msg="Started container" PID=1844 containerID=784e0c055361ad41595fc201f9c76f1d095f56f1f384327fd32f5513d90dc787 description=kube-system/storage-provisioner/storage-provisioner id=aa84ba36-40cf-4fdf-b1c0-0fb0fe996e16 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e3b2445ecda5a2308c1662912ef1a5af629b5685dd496a353ed4a3650b296980
	Aug 31 23:05:37 ha-330867 crio[647]: time="2024-08-31 23:05:37.611925812Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.0" id=1beb99c4-0b5d-459c-af5f-da49cc52c1ae name=/runtime.v1.ImageService/ImageStatus
	Aug 31 23:05:37 ha-330867 crio[647]: time="2024-08-31 23:05:37.612153126Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.0],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:ed8613b19e25d56d25e9ba0d83fd1e14f8ba070cb80e2674ba62ded55e260a9c registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d],Size_:86930758,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=1beb99c4-0b5d-459c-af5f-da49cc52c1ae name=/runtime.v1.ImageService/ImageStatus
	Aug 31 23:05:37 ha-330867 crio[647]: time="2024-08-31 23:05:37.612856548Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.0" id=eb2d1fe9-3327-4ae0-82b3-99ee13f874a8 name=/runtime.v1.ImageService/ImageStatus
	Aug 31 23:05:37 ha-330867 crio[647]: time="2024-08-31 23:05:37.613024621Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.0],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:ed8613b19e25d56d25e9ba0d83fd1e14f8ba070cb80e2674ba62ded55e260a9c registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d],Size_:86930758,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=eb2d1fe9-3327-4ae0-82b3-99ee13f874a8 name=/runtime.v1.ImageService/ImageStatus
	Aug 31 23:05:37 ha-330867 crio[647]: time="2024-08-31 23:05:37.613671485Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-330867/kube-controller-manager" id=b798af30-dd65-472b-b976-a4c1ebf7b54a name=/runtime.v1.RuntimeService/CreateContainer
	Aug 31 23:05:37 ha-330867 crio[647]: time="2024-08-31 23:05:37.613784428Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 31 23:05:37 ha-330867 crio[647]: time="2024-08-31 23:05:37.687392895Z" level=info msg="Created container 4d04a2e2423041fa8126ae325db89a3154408ded126c558846b9bd8c6f894814: kube-system/kube-controller-manager-ha-330867/kube-controller-manager" id=b798af30-dd65-472b-b976-a4c1ebf7b54a name=/runtime.v1.RuntimeService/CreateContainer
	Aug 31 23:05:37 ha-330867 crio[647]: time="2024-08-31 23:05:37.687951735Z" level=info msg="Starting container: 4d04a2e2423041fa8126ae325db89a3154408ded126c558846b9bd8c6f894814" id=68658769-c172-4a08-9ee3-87c405b762de name=/runtime.v1.RuntimeService/StartContainer
	Aug 31 23:05:37 ha-330867 crio[647]: time="2024-08-31 23:05:37.700610424Z" level=info msg="Started container" PID=1886 containerID=4d04a2e2423041fa8126ae325db89a3154408ded126c558846b9bd8c6f894814 description=kube-system/kube-controller-manager-ha-330867/kube-controller-manager id=68658769-c172-4a08-9ee3-87c405b762de name=/runtime.v1.RuntimeService/StartContainer sandboxID=10a2294d006c0d779a628d8ea86407f33042339f9e80fce5707a94b08c81f7da
	Aug 31 23:05:38 ha-330867 crio[647]: time="2024-08-31 23:05:38.645420912Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Aug 31 23:05:38 ha-330867 crio[647]: time="2024-08-31 23:05:38.660921186Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 31 23:05:38 ha-330867 crio[647]: time="2024-08-31 23:05:38.660976472Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 31 23:05:38 ha-330867 crio[647]: time="2024-08-31 23:05:38.660993949Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Aug 31 23:05:38 ha-330867 crio[647]: time="2024-08-31 23:05:38.704714388Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 31 23:05:38 ha-330867 crio[647]: time="2024-08-31 23:05:38.704751302Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 31 23:05:38 ha-330867 crio[647]: time="2024-08-31 23:05:38.704767343Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Aug 31 23:05:38 ha-330867 crio[647]: time="2024-08-31 23:05:38.737358824Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 31 23:05:38 ha-330867 crio[647]: time="2024-08-31 23:05:38.737397462Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 31 23:05:38 ha-330867 crio[647]: time="2024-08-31 23:05:38.737413749Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Aug 31 23:05:38 ha-330867 crio[647]: time="2024-08-31 23:05:38.754491147Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 31 23:05:38 ha-330867 crio[647]: time="2024-08-31 23:05:38.754531803Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4d04a2e242304       fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd   27 seconds ago       Running             kube-controller-manager   8                   10a2294d006c0       kube-controller-manager-ha-330867
	784e0c055361a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   35 seconds ago       Running             storage-provisioner       4                   e3b2445ecda5a       storage-provisioner
	639eecae7581a       7e2a4e229620ba3a757dc3699d10e8f77c453b7ee71936521668dec51669679d   36 seconds ago       Running             kube-vip                  3                   b0ca15f8c733b       kube-vip-ha-330867
	0f13729d897c4       cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388   40 seconds ago       Running             kube-apiserver            4                   56cae89cfd686       kube-apiserver-ha-330867
	28ca444cf15be       71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89   About a minute ago   Running             kube-proxy                2                   9a4303aa786ed       kube-proxy-fzpmn
	87e894ac16406       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   About a minute ago   Exited              storage-provisioner       3                   e3b2445ecda5a       storage-provisioner
	7af7127c943a7       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   About a minute ago   Running             busybox                   2                   f7c35bf418485       busybox-7dff88458-j8jjz
	22f13e4ae942f       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51   About a minute ago   Running             kindnet-cni               2                   00df796bba021       kindnet-bfwhw
	12937a92792cb       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93   About a minute ago   Running             coredns                   2                   3f9d158e650a3       coredns-6f6b679f8f-d67w5
	b67c66de01f32       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93   About a minute ago   Running             coredns                   2                   8a8725fa48295       coredns-6f6b679f8f-drznk
	48e3322328915       fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd   About a minute ago   Exited              kube-controller-manager   7                   10a2294d006c0       kube-controller-manager-ha-330867
	014d56afa3db9       fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb   About a minute ago   Running             kube-scheduler            2                   4af510d438c41       kube-scheduler-ha-330867
	97d3d24c45607       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da   About a minute ago   Running             etcd                      2                   83c9ad7e7e86e       etcd-ha-330867
	2f24ac961e97d       cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388   About a minute ago   Exited              kube-apiserver            3                   56cae89cfd686       kube-apiserver-ha-330867
	4538bb111f21e       7e2a4e229620ba3a757dc3699d10e8f77c453b7ee71936521668dec51669679d   About a minute ago   Exited              kube-vip                  2                   b0ca15f8c733b       kube-vip-ha-330867
	
	
	==> coredns [12937a92792cbc9dfe7f1a7048e1b29512f144ab62bab34629add4c066902fec] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35856 - 33477 "HINFO IN 7187382439375051762.1889952805862660318. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031713356s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1359125353]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Aug-2024 23:04:58.439) (total time: 30001ms):
	Trace[1359125353]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (23:05:28.440)
	Trace[1359125353]: [30.001587455s] [30.001587455s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1473931592]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Aug-2024 23:04:58.440) (total time: 30001ms):
	Trace[1473931592]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (23:05:28.441)
	Trace[1473931592]: [30.001362414s] [30.001362414s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[855831748]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Aug-2024 23:04:58.440) (total time: 30001ms):
	Trace[855831748]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (23:05:28.441)
	Trace[855831748]: [30.00174729s] [30.00174729s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [b67c66de01f32ab226d8a80a1edb8315c13704fd030cae0800d9e3fd8c549177] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50252 - 40474 "HINFO IN 4058840437178993957.6407488894931421478. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021469259s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1664133489]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Aug-2024 23:04:58.425) (total time: 30001ms):
	Trace[1664133489]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (23:05:28.426)
	Trace[1664133489]: [30.001479017s] [30.001479017s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[742287022]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Aug-2024 23:04:58.427) (total time: 30001ms):
	Trace[742287022]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (23:05:28.429)
	Trace[742287022]: [30.001132106s] [30.001132106s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[60363886]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Aug-2024 23:04:58.428) (total time: 30000ms):
	Trace[60363886]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (23:05:28.429)
	Trace[60363886]: [30.000817761s] [30.000817761s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-330867
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-330867
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=ha-330867
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_31T22_55_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:55:20 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-330867
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 23:05:18 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 31 Aug 2024 23:04:47 +0000   Sat, 31 Aug 2024 23:05:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 31 Aug 2024 23:04:47 +0000   Sat, 31 Aug 2024 23:05:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 31 Aug 2024 23:04:47 +0000   Sat, 31 Aug 2024 23:05:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 31 Aug 2024 23:04:47 +0000   Sat, 31 Aug 2024 23:05:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-330867
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8dac4945095747bfb6ba472e215da5ff
	  System UUID:                b6ed5d7f-ba7e-4438-84cd-2adf679138fc
	  Boot ID:                    4693db4c-e615-4c4b-bc39-ae1431ae1ebc
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-j8jjz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 coredns-6f6b679f8f-d67w5             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 coredns-6f6b679f8f-drznk             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 etcd-ha-330867                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-bfwhw                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-330867             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-330867    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-fzpmn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-330867             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-330867                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 66s                    kube-proxy       
	  Normal   Starting                 4m43s                  kube-proxy       
	  Warning  CgroupV1                 10m                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    10m                    kubelet          Node ha-330867 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  10m                    kubelet          Node ha-330867 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     10m                    kubelet          Node ha-330867 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                    node-controller  Node ha-330867 event: Registered Node ha-330867 in Controller
	  Normal   NodeReady                10m                    kubelet          Node ha-330867 status is now: NodeReady
	  Normal   RegisteredNode           10m                    node-controller  Node ha-330867 event: Registered Node ha-330867 in Controller
	  Normal   RegisteredNode           8m56s                  node-controller  Node ha-330867 event: Registered Node ha-330867 in Controller
	  Normal   RegisteredNode           6m21s                  node-controller  Node ha-330867 event: Registered Node ha-330867 in Controller
	  Normal   NodeHasNoDiskPressure    5m38s (x8 over 5m38s)  kubelet          Node ha-330867 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m38s (x7 over 5m38s)  kubelet          Node ha-330867 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  5m38s (x8 over 5m38s)  kubelet          Node ha-330867 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 5m38s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 5m38s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           5m                     node-controller  Node ha-330867 event: Registered Node ha-330867 in Controller
	  Normal   RegisteredNode           4m6s                   node-controller  Node ha-330867 event: Registered Node ha-330867 in Controller
	  Normal   NodeNotReady             3m45s                  node-controller  Node ha-330867 status is now: NodeNotReady
	  Normal   RegisteredNode           3m41s                  node-controller  Node ha-330867 event: Registered Node ha-330867 in Controller
	  Normal   Starting                 116s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 116s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  116s (x8 over 116s)    kubelet          Node ha-330867 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    116s (x8 over 116s)    kubelet          Node ha-330867 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     116s (x7 over 116s)    kubelet          Node ha-330867 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           77s                    node-controller  Node ha-330867 event: Registered Node ha-330867 in Controller
	  Normal   RegisteredNode           24s                    node-controller  Node ha-330867 event: Registered Node ha-330867 in Controller
	  Normal   NodeNotReady             7s                     node-controller  Node ha-330867 status is now: NodeNotReady
	
	
	Name:               ha-330867-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-330867-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=ha-330867
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_31T22_55_50_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:55:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-330867-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 23:05:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 23:04:49 +0000   Sat, 31 Aug 2024 22:55:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 23:04:49 +0000   Sat, 31 Aug 2024 22:55:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 23:04:49 +0000   Sat, 31 Aug 2024 22:55:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 23:04:49 +0000   Sat, 31 Aug 2024 22:56:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-330867-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 b6ca07fedc6043b2b8d416044798c01b
	  System UUID:                e586dd77-2a99-41e1-8b8e-c5e85c8b470c
	  Boot ID:                    4693db4c-e615-4c4b-bc39-ae1431ae1ebc
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kj4qn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 etcd-ha-330867-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-bdzqv                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-330867-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-330867-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-72g7x                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-330867-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-330867-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 6m26s                  kube-proxy       
	  Normal   Starting                 4m42s                  kube-proxy       
	  Normal   Starting                 58s                    kube-proxy       
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node ha-330867-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-330867-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-330867-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           10m                    node-controller  Node ha-330867-m02 event: Registered Node ha-330867-m02 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-330867-m02 event: Registered Node ha-330867-m02 in Controller
	  Normal   RegisteredNode           8m56s                  node-controller  Node ha-330867-m02 event: Registered Node ha-330867-m02 in Controller
	  Normal   NodeHasSufficientPID     6m51s (x7 over 6m51s)  kubelet          Node ha-330867-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    6m51s (x8 over 6m51s)  kubelet          Node ha-330867-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 6m51s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m51s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  6m51s (x8 over 6m51s)  kubelet          Node ha-330867-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           6m21s                  node-controller  Node ha-330867-m02 event: Registered Node ha-330867-m02 in Controller
	  Normal   NodeHasSufficientMemory  5m36s (x8 over 5m36s)  kubelet          Node ha-330867-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     5m36s (x7 over 5m36s)  kubelet          Node ha-330867-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    5m36s (x8 over 5m36s)  kubelet          Node ha-330867-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 5m36s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m36s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   RegisteredNode           5m                     node-controller  Node ha-330867-m02 event: Registered Node ha-330867-m02 in Controller
	  Normal   RegisteredNode           4m6s                   node-controller  Node ha-330867-m02 event: Registered Node ha-330867-m02 in Controller
	  Normal   RegisteredNode           3m41s                  node-controller  Node ha-330867-m02 event: Registered Node ha-330867-m02 in Controller
	  Normal   Starting                 113s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 113s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  113s (x8 over 113s)    kubelet          Node ha-330867-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    113s (x8 over 113s)    kubelet          Node ha-330867-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     113s (x7 over 113s)    kubelet          Node ha-330867-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           77s                    node-controller  Node ha-330867-m02 event: Registered Node ha-330867-m02 in Controller
	  Normal   RegisteredNode           24s                    node-controller  Node ha-330867-m02 event: Registered Node ha-330867-m02 in Controller
	
	
	Name:               ha-330867-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-330867-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=ha-330867
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_31T22_58_17_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:58:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-330867-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 23:05:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 23:05:49 +0000   Sat, 31 Aug 2024 23:05:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 23:05:49 +0000   Sat, 31 Aug 2024 23:05:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 23:05:49 +0000   Sat, 31 Aug 2024 23:05:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 23:05:49 +0000   Sat, 31 Aug 2024 23:05:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-330867-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 193ccb26a0254ee7870ea6a3098b574c
	  System UUID:                9496d7a6-36cc-4e01-9c23-720cff5b6faa
	  Boot ID:                    4693db4c-e615-4c4b-bc39-ae1431ae1ebc
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2r2dv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m56s
	  kube-system                 kindnet-fnccr              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m49s
	  kube-system                 kube-proxy-5n584           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m45s                  kube-proxy       
	  Normal   Starting                 8s                     kube-proxy       
	  Normal   Starting                 2m58s                  kube-proxy       
	  Warning  CgroupV1                 7m49s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     7m49s (x2 over 7m49s)  kubelet          Node ha-330867-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    7m49s (x2 over 7m49s)  kubelet          Node ha-330867-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  7m49s (x2 over 7m49s)  kubelet          Node ha-330867-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           7m47s                  node-controller  Node ha-330867-m04 event: Registered Node ha-330867-m04 in Controller
	  Normal   RegisteredNode           7m46s                  node-controller  Node ha-330867-m04 event: Registered Node ha-330867-m04 in Controller
	  Normal   RegisteredNode           7m44s                  node-controller  Node ha-330867-m04 event: Registered Node ha-330867-m04 in Controller
	  Normal   NodeReady                7m33s                  kubelet          Node ha-330867-m04 status is now: NodeReady
	  Normal   RegisteredNode           6m21s                  node-controller  Node ha-330867-m04 event: Registered Node ha-330867-m04 in Controller
	  Normal   RegisteredNode           5m                     node-controller  Node ha-330867-m04 event: Registered Node ha-330867-m04 in Controller
	  Normal   NodeNotReady             4m20s                  node-controller  Node ha-330867-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           4m6s                   node-controller  Node ha-330867-m04 event: Registered Node ha-330867-m04 in Controller
	  Normal   RegisteredNode           3m41s                  node-controller  Node ha-330867-m04 event: Registered Node ha-330867-m04 in Controller
	  Normal   Starting                 3m27s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m27s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     3m20s (x7 over 3m27s)  kubelet          Node ha-330867-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  3m14s (x8 over 3m27s)  kubelet          Node ha-330867-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m14s (x8 over 3m27s)  kubelet          Node ha-330867-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           77s                    node-controller  Node ha-330867-m04 event: Registered Node ha-330867-m04 in Controller
	  Normal   NodeNotReady             37s                    node-controller  Node ha-330867-m04 status is now: NodeNotReady
	  Normal   Starting                 29s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 29s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   RegisteredNode           24s                    node-controller  Node ha-330867-m04 event: Registered Node ha-330867-m04 in Controller
	  Normal   NodeHasSufficientPID     23s (x7 over 29s)      kubelet          Node ha-330867-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  16s (x8 over 29s)      kubelet          Node ha-330867-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16s (x8 over 29s)      kubelet          Node ha-330867-m04 status is now: NodeHasNoDiskPressure
	
	
	==> dmesg <==
	[Aug31 20:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014722] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.471263] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.854339] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.621095] kauditd_printk_skb: 36 callbacks suppressed
	[Aug31 21:33] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Aug31 21:36] hrtimer: interrupt took 85633258 ns
	[Aug31 22:54] FS-Cache: Duplicate cookie detected
	[  +0.013283] FS-Cache: O-cookie c=00000025 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001012] FS-Cache: O-cookie d=00000000cc966b56{9P.session} n=000000008b7f54ff
	[  +0.001103] FS-Cache: O-key=[10] '34323937323438343432'
	[  +0.000787] FS-Cache: N-cookie c=00000026 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000957] FS-Cache: N-cookie d=00000000cc966b56{9P.session} n=0000000065232866
	[  +0.001123] FS-Cache: N-key=[10] '34323937323438343432'
	
	
	==> etcd [97d3d24c456074d2d097c833b39f8d34ad15bf5fa946e6f85f159a420de3c262] <==
	{"level":"info","ts":"2024-08-31T23:04:38.419428Z","caller":"traceutil/trace.go:171","msg":"trace[963973275] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; response_count:0; response_revision:2604; }","duration":"3.361126509s","start":"2024-08-31T23:04:35.058294Z","end":"2024-08-31T23:04:38.419421Z","steps":["trace[963973275] 'agreement among raft nodes before linearized reading'  (duration: 3.361060614s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T23:04:38.419478Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-31T23:04:35.058256Z","time spent":"3.361213212s","remote":"127.0.0.1:44646","response type":"/etcdserverpb.KV/Range","request count":0,"request size":63,"response count":0,"response size":29,"request content":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-31T23:04:38.419702Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"3.539482216s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-node-critical\" ","response":"range_response_count:1 size:442"}
	{"level":"info","ts":"2024-08-31T23:04:38.419744Z","caller":"traceutil/trace.go:171","msg":"trace[899575509] range","detail":"{range_begin:/registry/priorityclasses/system-node-critical; range_end:; response_count:1; response_revision:2604; }","duration":"3.539532054s","start":"2024-08-31T23:04:34.880204Z","end":"2024-08-31T23:04:38.419736Z","steps":["trace[899575509] 'agreement among raft nodes before linearized reading'  (duration: 3.539420867s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T23:04:38.419767Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-31T23:04:34.880159Z","time spent":"3.539601936s","remote":"127.0.0.1:44634","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":466,"request content":"key:\"/registry/priorityclasses/system-node-critical\" "}
	{"level":"warn","ts":"2024-08-31T23:04:38.419874Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"3.54259584s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-31T23:04:38.419900Z","caller":"traceutil/trace.go:171","msg":"trace[1894354340] range","detail":"{range_begin:/registry/clusterroles; range_end:; response_count:0; response_revision:2604; }","duration":"3.542622843s","start":"2024-08-31T23:04:34.877271Z","end":"2024-08-31T23:04:38.419894Z","steps":["trace[1894354340] 'agreement among raft nodes before linearized reading'  (duration: 3.54258048s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T23:04:38.419917Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-31T23:04:34.877221Z","time spent":"3.54269106s","remote":"127.0.0.1:44606","response type":"/etcdserverpb.KV/Range","request count":0,"request size":26,"response count":0,"response size":29,"request content":"key:\"/registry/clusterroles\" limit:1 "}
	{"level":"warn","ts":"2024-08-31T23:04:38.419884Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"3.366732691s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" limit:10000 ","response":"range_response_count:67 size:60519"}
	{"level":"info","ts":"2024-08-31T23:04:38.419981Z","caller":"traceutil/trace.go:171","msg":"trace[1744101375] range","detail":"{range_begin:/registry/clusterroles/; range_end:/registry/clusterroles0; response_count:67; response_revision:2604; }","duration":"3.366833883s","start":"2024-08-31T23:04:35.053139Z","end":"2024-08-31T23:04:38.419973Z","steps":["trace[1744101375] 'agreement among raft nodes before linearized reading'  (duration: 3.366460429s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T23:04:38.420031Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-31T23:04:35.053128Z","time spent":"3.366893313s","remote":"127.0.0.1:44606","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":67,"response size":60543,"request content":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-31T23:04:38.420058Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"3.554826994s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" limit:500 ","response":"range_response_count:3 size:18722"}
	{"level":"info","ts":"2024-08-31T23:04:38.420085Z","caller":"traceutil/trace.go:171","msg":"trace[818786440] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:3; response_revision:2604; }","duration":"3.554857304s","start":"2024-08-31T23:04:34.865222Z","end":"2024-08-31T23:04:38.420079Z","steps":["trace[818786440] 'agreement among raft nodes before linearized reading'  (duration: 3.554778683s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T23:04:38.420103Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-31T23:04:34.865184Z","time spent":"3.554913731s","remote":"127.0.0.1:44442","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":3,"response size":18746,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" limit:500 "}
	{"level":"warn","ts":"2024-08-31T23:04:38.420207Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"4.203912944s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/apiserver-urmuzqbergf4ma3gcd2p7ujace\" ","response":"range_response_count:1 size:688"}
	{"level":"info","ts":"2024-08-31T23:04:38.420234Z","caller":"traceutil/trace.go:171","msg":"trace[1210949995] range","detail":"{range_begin:/registry/leases/kube-system/apiserver-urmuzqbergf4ma3gcd2p7ujace; range_end:; response_count:1; response_revision:2604; }","duration":"4.20393706s","start":"2024-08-31T23:04:34.216288Z","end":"2024-08-31T23:04:38.420225Z","steps":["trace[1210949995] 'agreement among raft nodes before linearized reading'  (duration: 4.203896748s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T23:04:38.420251Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-31T23:04:34.216248Z","time spent":"4.203998565s","remote":"127.0.0.1:44532","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":712,"request content":"key:\"/registry/leases/kube-system/apiserver-urmuzqbergf4ma3gcd2p7ujace\" "}
	{"level":"warn","ts":"2024-08-31T23:04:38.420387Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"4.35014199s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-330867-m02\" ","response":"range_response_count:1 size:6350"}
	{"level":"info","ts":"2024-08-31T23:04:38.422682Z","caller":"traceutil/trace.go:171","msg":"trace[2118157856] range","detail":"{range_begin:/registry/minions/ha-330867-m02; range_end:; response_count:1; response_revision:2604; }","duration":"4.352427582s","start":"2024-08-31T23:04:34.070238Z","end":"2024-08-31T23:04:38.422666Z","steps":["trace[2118157856] 'agreement among raft nodes before linearized reading'  (duration: 4.350102302s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T23:04:38.422748Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-31T23:04:34.064514Z","time spent":"4.35821411s","remote":"127.0.0.1:44442","response type":"/etcdserverpb.KV/Range","request count":0,"request size":33,"response count":1,"response size":6374,"request content":"key:\"/registry/minions/ha-330867-m02\" "}
	{"level":"info","ts":"2024-08-31T23:04:38.412323Z","caller":"traceutil/trace.go:171","msg":"trace[557854874] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:8; response_revision:2604; }","duration":"3.345530712s","start":"2024-08-31T23:04:35.066782Z","end":"2024-08-31T23:04:38.412313Z","steps":["trace[557854874] 'agreement among raft nodes before linearized reading'  (duration: 3.341552256s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T23:04:38.423911Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-31T23:04:35.066766Z","time spent":"3.357120648s","remote":"127.0.0.1:44712","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":8,"response size":5443,"request content":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-31T23:04:38.420389Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"3.367314833s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" limit:10000 ","response":"range_response_count:2 size:912"}
	{"level":"info","ts":"2024-08-31T23:04:38.424148Z","caller":"traceutil/trace.go:171","msg":"trace[1816455838] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; response_count:2; response_revision:2604; }","duration":"3.371070357s","start":"2024-08-31T23:04:35.053063Z","end":"2024-08-31T23:04:38.424134Z","steps":["trace[1816455838] 'agreement among raft nodes before linearized reading'  (duration: 3.367266201s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-31T23:04:38.424177Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-31T23:04:35.053052Z","time spent":"3.371115403s","remote":"127.0.0.1:44634","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":2,"response size":936,"request content":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" limit:10000 "}
	
	
	==> kernel <==
	 23:06:05 up  2:48,  0 users,  load average: 4.68, 3.48, 2.42
	Linux ha-330867 5.15.0-1068-aws #74~20.04.1-Ubuntu SMP Tue Aug 6 19:45:17 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [22f13e4ae942faea8efd3ca7dbe23bed301d0cade63201ce76b1825ca40b596f] <==
	Trace[1761649363]: [30.072740653s] [30.072740653s] END
	E0831 23:05:28.715034       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0831 23:05:30.042191       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0831 23:05:30.042334       1 metrics.go:61] Registering metrics
	I0831 23:05:30.042461       1 controller.go:374] Syncing nftables rules
	I0831 23:05:38.644555       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0831 23:05:38.644640       1 main.go:322] Node ha-330867-m02 has CIDR [10.244.1.0/24] 
	I0831 23:05:38.644924       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.49.3 Flags: [] Table: 0} 
	I0831 23:05:38.645024       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0831 23:05:38.645049       1 main.go:322] Node ha-330867-m04 has CIDR [10.244.3.0/24] 
	I0831 23:05:38.645104       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.49.5 Flags: [] Table: 0} 
	I0831 23:05:38.645161       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 23:05:38.645174       1 main.go:299] handling current node
	I0831 23:05:48.647629       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 23:05:48.647663       1 main.go:299] handling current node
	I0831 23:05:48.647678       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0831 23:05:48.647684       1 main.go:322] Node ha-330867-m02 has CIDR [10.244.1.0/24] 
	I0831 23:05:48.647785       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0831 23:05:48.647797       1 main.go:322] Node ha-330867-m04 has CIDR [10.244.3.0/24] 
	I0831 23:05:58.642064       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0831 23:05:58.642218       1 main.go:299] handling current node
	I0831 23:05:58.642259       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0831 23:05:58.642305       1 main.go:322] Node ha-330867-m02 has CIDR [10.244.1.0/24] 
	I0831 23:05:58.642646       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0831 23:05:58.642865       1 main.go:322] Node ha-330867-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [0f13729d897c478fdc1258730fc216cedaa85e59792f8f66089847d662535ae2] <==
	I0831 23:05:26.506568       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0831 23:05:26.520547       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0831 23:05:26.521040       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0831 23:05:26.521103       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0831 23:05:26.897633       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0831 23:05:26.918269       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0831 23:05:26.918311       1 policy_source.go:224] refreshing policies
	I0831 23:05:26.920020       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0831 23:05:26.924157       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0831 23:05:26.926542       1 shared_informer.go:320] Caches are synced for configmaps
	I0831 23:05:26.926624       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0831 23:05:26.927909       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0831 23:05:26.928028       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0831 23:05:26.928063       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0831 23:05:26.945699       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0831 23:05:26.945731       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0831 23:05:26.946012       1 aggregator.go:171] initial CRD sync complete...
	I0831 23:05:26.946036       1 autoregister_controller.go:144] Starting autoregister controller
	I0831 23:05:26.946043       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0831 23:05:26.946049       1 cache.go:39] Caches are synced for autoregister controller
	I0831 23:05:26.947418       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0831 23:05:27.511257       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0831 23:05:28.163333       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I0831 23:05:28.164956       1 controller.go:615] quota admission added evaluator for: endpoints
	I0831 23:05:28.180539       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [2f24ac961e97dda8290c8280b98a810aed8f8b8bbf6019fe0443bccf56b2b05d] <==
	W0831 23:04:38.394757       1 reflector.go:561] storage/cacher.go:/apiextensions.k8s.io/customresourcedefinitions: failed to list *apiextensions.CustomResourceDefinition: etcdserver: leader changed
	E0831 23:04:38.394792       1 cacher.go:478] cacher (customresourcedefinitions.apiextensions.k8s.io): unexpected ListAndWatch error: failed to list *apiextensions.CustomResourceDefinition: etcdserver: leader changed; reinitializing...
	I0831 23:04:38.477303       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0831 23:04:38.477807       1 aggregator.go:171] initial CRD sync complete...
	I0831 23:04:38.477865       1 autoregister_controller.go:144] Starting autoregister controller
	I0831 23:04:38.477895       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0831 23:04:38.502519       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0831 23:04:38.519659       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I0831 23:04:38.557237       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0831 23:04:38.574914       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0831 23:04:38.575011       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0831 23:04:38.575126       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0831 23:04:38.577081       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0831 23:04:38.577159       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0831 23:04:38.577749       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0831 23:04:38.578293       1 cache.go:39] Caches are synced for autoregister controller
	I0831 23:04:38.580320       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0831 23:04:38.580387       1 policy_source.go:224] refreshing policies
	I0831 23:04:38.583241       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0831 23:04:38.601436       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0831 23:04:38.621635       1 controller.go:615] quota admission added evaluator for: endpoints
	I0831 23:04:38.628700       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0831 23:04:38.632063       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0831 23:04:39.079720       1 shared_informer.go:320] Caches are synced for configmaps
	F0831 23:05:22.875308       1 hooks.go:210] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	
	
	==> kube-controller-manager [48e3322328915c30135a46acc7bd5052271d8973f65b31f2f5f1f2cbac2a472f] <==
	I0831 23:05:00.867920       1 serving.go:386] Generated self-signed cert in-memory
	I0831 23:05:01.346114       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0831 23:05:01.346150       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 23:05:01.347663       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0831 23:05:01.347847       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0831 23:05:01.347960       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0831 23:05:01.348056       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0831 23:05:11.365540       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [4d04a2e2423041fa8126ae325db89a3154408ded126c558846b9bd8c6f894814] <==
	I0831 23:05:41.499023       1 shared_informer.go:320] Caches are synced for job
	I0831 23:05:41.550282       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-330867-m04"
	I0831 23:05:41.560529       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0831 23:05:41.566188       1 shared_informer.go:320] Caches are synced for attach detach
	I0831 23:05:41.600977       1 shared_informer.go:320] Caches are synced for persistent volume
	I0831 23:05:41.616481       1 shared_informer.go:320] Caches are synced for PV protection
	I0831 23:05:41.624674       1 shared_informer.go:320] Caches are synced for resource quota
	I0831 23:05:41.633492       1 shared_informer.go:320] Caches are synced for resource quota
	I0831 23:05:42.098538       1 shared_informer.go:320] Caches are synced for garbage collector
	I0831 23:05:42.115518       1 shared_informer.go:320] Caches are synced for garbage collector
	I0831 23:05:42.115575       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0831 23:05:49.337082       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-330867-m04"
	I0831 23:05:49.337593       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-330867-m04"
	I0831 23:05:49.350348       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-330867-m04"
	I0831 23:05:51.492877       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-330867-m04"
	I0831 23:05:56.536765       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="58.723µs"
	I0831 23:05:57.761421       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="22.570177ms"
	I0831 23:05:57.762506       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.183µs"
	I0831 23:05:58.763082       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-330867-m04"
	I0831 23:05:58.763203       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-330867"
	I0831 23:05:58.783492       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-330867"
	I0831 23:05:58.883140       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.314766ms"
	I0831 23:05:58.883598       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="58.6µs"
	I0831 23:06:01.549345       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-330867"
	I0831 23:06:04.061225       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-330867"
	
	
	==> kube-proxy [28ca444cf15be3f7d7937fd15b0d1bd6c82f5a593c677c5499a6023a1356b9ed] <==
	I0831 23:04:58.584686       1 server_linux.go:66] "Using iptables proxy"
	I0831 23:04:58.719772       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0831 23:04:58.719926       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0831 23:04:58.909943       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0831 23:04:58.910098       1 server_linux.go:169] "Using iptables Proxier"
	I0831 23:04:58.972594       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0831 23:04:58.973187       1 server.go:483] "Version info" version="v1.31.0"
	I0831 23:04:58.973251       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 23:04:58.983200       1 config.go:197] "Starting service config controller"
	I0831 23:04:58.983309       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0831 23:04:58.983375       1 config.go:104] "Starting endpoint slice config controller"
	I0831 23:04:58.983403       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0831 23:04:58.998580       1 config.go:326] "Starting node config controller"
	I0831 23:04:58.998650       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0831 23:04:59.084293       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0831 23:04:59.084452       1 shared_informer.go:320] Caches are synced for service config
	I0831 23:04:59.099160       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [014d56afa3db9688a085c8c7d4e74ca606d199a255a9f5bf4b630db360b69f0b] <==
	W0831 23:04:31.828358       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0831 23:04:31.828426       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0831 23:04:32.181384       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0831 23:04:32.181435       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 23:04:32.186054       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0831 23:04:32.186101       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 23:04:32.815520       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0831 23:04:32.815566       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 23:04:38.354500       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0831 23:04:38.354558       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0831 23:04:44.149851       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0831 23:05:26.846552       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps) - error from a previous attempt: read tcp 192.168.49.2:53754->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0831 23:05:26.846653       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces) - error from a previous attempt: read tcp 192.168.49.2:53888->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0831 23:05:26.846716       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:53848->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0831 23:05:26.846768       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes) - error from a previous attempt: read tcp 192.168.49.2:53832->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0831 23:05:26.846807       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps) - error from a previous attempt: read tcp 192.168.49.2:53830->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0831 23:05:26.846849       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:53822->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0831 23:05:26.846893       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods) - error from a previous attempt: read tcp 192.168.49.2:53806->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0831 23:05:26.846938       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes) - error from a previous attempt: read tcp 192.168.49.2:53798->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0831 23:05:26.846983       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy) - error from a previous attempt: read tcp 192.168.49.2:53794->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0831 23:05:26.847047       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services) - error from a previous attempt: read tcp 192.168.49.2:53782->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0831 23:05:26.847105       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers) - error from a previous attempt: read tcp 192.168.49.2:53774->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0831 23:05:26.847166       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:53764->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0831 23:05:26.847210       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims) - error from a previous attempt: read tcp 192.168.49.2:53874->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0831 23:05:26.847254       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:53858->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	
	
	==> kubelet <==
	Aug 31 23:05:14 ha-330867 kubelet[762]: E0831 23:05:14.736848     762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-330867_kube-system(1cf3c5f5c933a746c10ef12e7d6d5c5d)\"" pod="kube-system/kube-controller-manager-ha-330867" podUID="1cf3c5f5c933a746c10ef12e7d6d5c5d"
	Aug 31 23:05:19 ha-330867 kubelet[762]: E0831 23:05:19.611263     762 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145519611060108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:05:19 ha-330867 kubelet[762]: E0831 23:05:19.611295     762 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145519611060108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:05:23 ha-330867 kubelet[762]: I0831 23:05:23.806750     762 scope.go:117] "RemoveContainer" containerID="2f24ac961e97dda8290c8280b98a810aed8f8b8bbf6019fe0443bccf56b2b05d"
	Aug 31 23:05:23 ha-330867 kubelet[762]: I0831 23:05:23.807658     762 status_manager.go:851] "Failed to get status for pod" podUID="cf3ee01affeaae0e79f11b427bc3732c" pod="kube-system/kube-apiserver-ha-330867" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-330867\": dial tcp 192.168.49.254:8443: connect: connection refused"
	Aug 31 23:05:23 ha-330867 kubelet[762]: E0831 23:05:23.809690     762 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-330867.17f0f2a6f768a869\": dial tcp 192.168.49.254:8443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-ha-330867.17f0f2a6f768a869  kube-system   2779 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-330867,UID:cf3ee01affeaae0e79f11b427bc3732c,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"registry.k8s.io/kube-apiserver:v1.31.0\" already present on machine,Source:EventSource{Component:kubelet,Host:ha-330867,},FirstTimestamp:2024-08-31 23:04:16 +0000 UTC,LastTimestamp:2024-08-31 23:05:23.808964392 +0000 UTC m=+74.403601406,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-330867,}"
	Aug 31 23:05:26 ha-330867 kubelet[762]: I0831 23:05:26.611100     762 scope.go:117] "RemoveContainer" containerID="48e3322328915c30135a46acc7bd5052271d8973f65b31f2f5f1f2cbac2a472f"
	Aug 31 23:05:26 ha-330867 kubelet[762]: E0831 23:05:26.611282     762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-330867_kube-system(1cf3c5f5c933a746c10ef12e7d6d5c5d)\"" pod="kube-system/kube-controller-manager-ha-330867" podUID="1cf3c5f5c933a746c10ef12e7d6d5c5d"
	Aug 31 23:05:26 ha-330867 kubelet[762]: E0831 23:05:26.884600     762 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:37378->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	Aug 31 23:05:26 ha-330867 kubelet[762]: E0831 23:05:26.884776     762 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:37340->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	Aug 31 23:05:26 ha-330867 kubelet[762]: E0831 23:05:26.885546     762 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:37380->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	Aug 31 23:05:27 ha-330867 kubelet[762]: I0831 23:05:27.815983     762 scope.go:117] "RemoveContainer" containerID="4538bb111f21e6a8d7301d2c17bdde81e8500a2983e4bab7d38b124c9afcb224"
	Aug 31 23:05:28 ha-330867 kubelet[762]: I0831 23:05:28.820596     762 scope.go:117] "RemoveContainer" containerID="87e894ac16406489cd164a9fd2e9d50669a86637a9c2bf82884de379cabc570c"
	Aug 31 23:05:29 ha-330867 kubelet[762]: E0831 23:05:29.612916     762 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145529612584976,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:05:29 ha-330867 kubelet[762]: E0831 23:05:29.612947     762 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145529612584976,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:05:37 ha-330867 kubelet[762]: I0831 23:05:37.611279     762 scope.go:117] "RemoveContainer" containerID="48e3322328915c30135a46acc7bd5052271d8973f65b31f2f5f1f2cbac2a472f"
	Aug 31 23:05:38 ha-330867 kubelet[762]: E0831 23:05:38.188248     762 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-330867?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Aug 31 23:05:39 ha-330867 kubelet[762]: E0831 23:05:39.615352     762 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145539615199079,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:05:39 ha-330867 kubelet[762]: E0831 23:05:39.615391     762 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145539615199079,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:05:48 ha-330867 kubelet[762]: E0831 23:05:48.189002     762 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-330867?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Aug 31 23:05:49 ha-330867 kubelet[762]: E0831 23:05:49.617118     762 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145549616918968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:05:49 ha-330867 kubelet[762]: E0831 23:05:49.617150     762 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145549616918968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:05:58 ha-330867 kubelet[762]: E0831 23:05:58.189833     762 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-330867?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Aug 31 23:05:59 ha-330867 kubelet[762]: E0831 23:05:59.618175     762 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145559617984788,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 31 23:05:59 ha-330867 kubelet[762]: E0831 23:05:59.618214     762 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725145559617984788,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-330867 -n ha-330867
helpers_test.go:262: (dbg) Run:  kubectl --context ha-330867 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:286: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:287: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (125.10s)

                                                
                                    

Test pass (303/338)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.27
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.0/json-events 6.18
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.1
18 TestDownloadOnly/v1.31.0/DeleteAll 0.22
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.62
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 207.61
31 TestAddons/serial/GCPAuth/Namespaces 0.29
35 TestAddons/parallel/InspektorGadget 11.86
39 TestAddons/parallel/CSI 39.5
40 TestAddons/parallel/Headlamp 18.3
41 TestAddons/parallel/CloudSpanner 6.73
42 TestAddons/parallel/LocalPath 8.4
43 TestAddons/parallel/NvidiaDevicePlugin 6.54
44 TestAddons/parallel/Yakd 11.78
45 TestAddons/StoppedEnableDisable 12.19
46 TestCertOptions 40.13
47 TestCertExpiration 249.48
49 TestForceSystemdFlag 34.31
50 TestForceSystemdEnv 39.27
56 TestErrorSpam/setup 34.04
57 TestErrorSpam/start 0.79
58 TestErrorSpam/status 1.21
59 TestErrorSpam/pause 1.83
60 TestErrorSpam/unpause 1.81
61 TestErrorSpam/stop 1.46
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 51.31
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 29.51
68 TestFunctional/serial/KubeContext 0.07
69 TestFunctional/serial/KubectlGetPods 0.09
72 TestFunctional/serial/CacheCmd/cache/add_remote 11.81
73 TestFunctional/serial/CacheCmd/cache/add_local 1.46
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.08
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.28
78 TestFunctional/serial/CacheCmd/cache/delete 0.12
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
81 TestFunctional/serial/ExtraConfig 37.67
82 TestFunctional/serial/ComponentHealth 0.11
83 TestFunctional/serial/LogsCmd 1.69
84 TestFunctional/serial/LogsFileCmd 1.75
85 TestFunctional/serial/InvalidService 4.12
87 TestFunctional/parallel/ConfigCmd 0.48
88 TestFunctional/parallel/DashboardCmd 12.2
89 TestFunctional/parallel/DryRun 0.45
90 TestFunctional/parallel/InternationalLanguage 0.23
91 TestFunctional/parallel/StatusCmd 1.27
95 TestFunctional/parallel/ServiceCmdConnect 11.72
96 TestFunctional/parallel/AddonsCmd 0.21
97 TestFunctional/parallel/PersistentVolumeClaim 27.12
99 TestFunctional/parallel/SSHCmd 0.72
100 TestFunctional/parallel/CpCmd 2.33
102 TestFunctional/parallel/FileSync 0.41
103 TestFunctional/parallel/CertSync 2.2
107 TestFunctional/parallel/NodeLabels 0.08
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.61
111 TestFunctional/parallel/License 0.25
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.45
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 7.24
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
125 TestFunctional/parallel/ServiceCmd/List 0.58
126 TestFunctional/parallel/ProfileCmd/profile_list 0.5
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.61
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.53
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.64
130 TestFunctional/parallel/MountCmd/any-port 8.74
131 TestFunctional/parallel/ServiceCmd/Format 0.38
132 TestFunctional/parallel/ServiceCmd/URL 0.46
133 TestFunctional/parallel/MountCmd/specific-port 2.6
134 TestFunctional/parallel/MountCmd/VerifyCleanup 2.57
135 TestFunctional/parallel/Version/short 0.08
136 TestFunctional/parallel/Version/components 1
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.32
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.63
142 TestFunctional/parallel/ImageCommands/Setup 0.72
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.68
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.18
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.31
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.59
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.66
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.86
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.58
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 178.31
160 TestMultiControlPlane/serial/DeployApp 9.31
161 TestMultiControlPlane/serial/PingHostFromPods 1.61
162 TestMultiControlPlane/serial/AddWorkerNode 35.98
163 TestMultiControlPlane/serial/NodeLabels 0.11
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.75
165 TestMultiControlPlane/serial/CopyFile 19.41
166 TestMultiControlPlane/serial/StopSecondaryNode 12.74
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.6
168 TestMultiControlPlane/serial/RestartSecondaryNode 23.89
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 6.64
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 205.76
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.55
173 TestMultiControlPlane/serial/StopCluster 35.81
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.53
176 TestMultiControlPlane/serial/AddSecondaryNode 74.94
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.77
181 TestJSONOutput/start/Command 49.38
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.78
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.68
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.87
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.22
206 TestKicCustomNetwork/create_custom_network 37.95
207 TestKicCustomNetwork/use_default_bridge_network 34.87
208 TestKicExistingNetwork 33
209 TestKicCustomSubnet 35.43
210 TestKicStaticIP 38.72
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 71.88
215 TestMountStart/serial/StartWithMountFirst 9.39
216 TestMountStart/serial/VerifyMountFirst 0.28
217 TestMountStart/serial/StartWithMountSecond 7.3
218 TestMountStart/serial/VerifyMountSecond 0.25
219 TestMountStart/serial/DeleteFirst 1.63
220 TestMountStart/serial/VerifyMountPostDelete 0.27
221 TestMountStart/serial/Stop 1.21
222 TestMountStart/serial/RestartStopped 7.94
223 TestMountStart/serial/VerifyMountPostStop 0.26
226 TestContainerIPsMultiNetwork/serial/CreateExtnet 0.07
227 TestContainerIPsMultiNetwork/serial/FreshStart 49.51
228 TestContainerIPsMultiNetwork/serial/ConnectExtnet 0.1
229 TestContainerIPsMultiNetwork/serial/Stop 6.09
230 TestContainerIPsMultiNetwork/serial/VerifyStatus 0.07
231 TestContainerIPsMultiNetwork/serial/Start 19.14
232 TestContainerIPsMultiNetwork/serial/VerifyNetworks 0.02
233 TestContainerIPsMultiNetwork/serial/Delete 2.53
234 TestContainerIPsMultiNetwork/serial/DeleteExtnet 0.1
235 TestContainerIPsMultiNetwork/serial/VerifyDeletedResources 0.11
238 TestMultiNode/serial/FreshStart2Nodes 83
239 TestMultiNode/serial/DeployApp2Nodes 6.17
240 TestMultiNode/serial/PingHostFrom2Pods 0.98
241 TestMultiNode/serial/AddNode 29.77
242 TestMultiNode/serial/MultiNodeLabels 0.1
243 TestMultiNode/serial/ProfileList 0.32
244 TestMultiNode/serial/CopyFile 10.37
245 TestMultiNode/serial/StopNode 2.27
246 TestMultiNode/serial/StartAfterStop 10.39
247 TestMultiNode/serial/RestartKeepsNodes 115.95
248 TestMultiNode/serial/DeleteNode 5.59
249 TestMultiNode/serial/StopMultiNode 23.89
250 TestMultiNode/serial/RestartMultiNode 47.54
251 TestMultiNode/serial/ValidateNameConflict 37.44
256 TestPreload 132.19
258 TestScheduledStopUnix 105.01
261 TestInsufficientStorage 10.72
262 TestRunningBinaryUpgrade 75.3
264 TestKubernetesUpgrade 389.8
265 TestMissingContainerUpgrade 169.96
267 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
268 TestNoKubernetes/serial/StartWithK8s 38.67
269 TestNoKubernetes/serial/StartWithStopK8s 8.96
270 TestNoKubernetes/serial/Start 9.12
271 TestNoKubernetes/serial/VerifyK8sNotRunning 0.46
272 TestNoKubernetes/serial/ProfileList 1.11
273 TestNoKubernetes/serial/Stop 1.32
274 TestNoKubernetes/serial/StartNoArgs 7.38
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.4
276 TestStoppedBinaryUpgrade/Setup 0.68
277 TestStoppedBinaryUpgrade/Upgrade 113.02
278 TestStoppedBinaryUpgrade/MinikubeLogs 1.41
287 TestPause/serial/Start 53.65
288 TestPause/serial/SecondStartNoReconfiguration 124.88
296 TestNetworkPlugins/group/false 3.63
300 TestPause/serial/Pause 0.93
301 TestPause/serial/VerifyStatus 0.31
302 TestPause/serial/Unpause 0.96
303 TestPause/serial/PauseAgain 1.22
304 TestPause/serial/DeletePaused 5.08
305 TestPause/serial/VerifyDeletedResources 0.56
307 TestStartStop/group/old-k8s-version/serial/FirstStart 158.17
308 TestStartStop/group/old-k8s-version/serial/DeployApp 10.61
309 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.14
310 TestStartStop/group/old-k8s-version/serial/Stop 12.06
311 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
312 TestStartStop/group/old-k8s-version/serial/SecondStart 153.89
314 TestStartStop/group/no-preload/serial/FirstStart 70.16
315 TestStartStop/group/no-preload/serial/DeployApp 10.4
316 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.15
317 TestStartStop/group/no-preload/serial/Stop 12.02
318 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
319 TestStartStop/group/no-preload/serial/SecondStart 277.66
320 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
321 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
322 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
323 TestStartStop/group/old-k8s-version/serial/Pause 3.22
325 TestStartStop/group/embed-certs/serial/FirstStart 61.74
326 TestStartStop/group/embed-certs/serial/DeployApp 10.64
327 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.17
328 TestStartStop/group/embed-certs/serial/Stop 12.06
329 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
330 TestStartStop/group/embed-certs/serial/SecondStart 267.87
331 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
332 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
333 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
334 TestStartStop/group/no-preload/serial/Pause 3.23
336 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 49.49
337 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.36
338 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.1
339 TestStartStop/group/default-k8s-diff-port/serial/Stop 12
340 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
341 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 289.83
342 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
343 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
344 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.31
345 TestStartStop/group/embed-certs/serial/Pause 3.16
347 TestStartStop/group/newest-cni/serial/FirstStart 35.18
348 TestStartStop/group/newest-cni/serial/DeployApp 0
349 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.26
350 TestStartStop/group/newest-cni/serial/Stop 1.3
351 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
352 TestStartStop/group/newest-cni/serial/SecondStart 18.51
353 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
354 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
355 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
356 TestStartStop/group/newest-cni/serial/Pause 3.3
357 TestNetworkPlugins/group/auto/Start 54.69
358 TestNetworkPlugins/group/auto/KubeletFlags 0.29
359 TestNetworkPlugins/group/auto/NetCatPod 11.3
360 TestNetworkPlugins/group/auto/DNS 0.19
361 TestNetworkPlugins/group/auto/Localhost 0.15
362 TestNetworkPlugins/group/auto/HairPin 0.15
363 TestNetworkPlugins/group/kindnet/Start 52.99
364 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
365 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
366 TestNetworkPlugins/group/kindnet/NetCatPod 12.29
367 TestNetworkPlugins/group/kindnet/DNS 0.18
368 TestNetworkPlugins/group/kindnet/Localhost 0.17
369 TestNetworkPlugins/group/kindnet/HairPin 0.16
370 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
371 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.13
372 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.35
373 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.06
374 TestNetworkPlugins/group/calico/Start 75.43
375 TestNetworkPlugins/group/custom-flannel/Start 65.27
376 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
377 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.3
378 TestNetworkPlugins/group/calico/ControllerPod 6.01
379 TestNetworkPlugins/group/calico/KubeletFlags 0.28
380 TestNetworkPlugins/group/calico/NetCatPod 12.26
381 TestNetworkPlugins/group/custom-flannel/DNS 0.25
382 TestNetworkPlugins/group/custom-flannel/Localhost 0.25
383 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
384 TestNetworkPlugins/group/calico/DNS 0.29
385 TestNetworkPlugins/group/calico/Localhost 0.21
386 TestNetworkPlugins/group/calico/HairPin 0.22
387 TestNetworkPlugins/group/enable-default-cni/Start 80.44
388 TestNetworkPlugins/group/flannel/Start 63.81
389 TestNetworkPlugins/group/flannel/ControllerPod 6.01
390 TestNetworkPlugins/group/flannel/KubeletFlags 0.4
391 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.43
392 TestNetworkPlugins/group/flannel/NetCatPod 13.39
393 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.37
394 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
395 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
396 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
397 TestNetworkPlugins/group/flannel/DNS 0.18
398 TestNetworkPlugins/group/flannel/Localhost 0.16
399 TestNetworkPlugins/group/flannel/HairPin 0.17
400 TestNetworkPlugins/group/bridge/Start 69.12
401 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
402 TestNetworkPlugins/group/bridge/NetCatPod 11.28
403 TestNetworkPlugins/group/bridge/DNS 0.17
404 TestNetworkPlugins/group/bridge/Localhost 0.16
405 TestNetworkPlugins/group/bridge/HairPin 0.2
TestDownloadOnly/v1.20.0/json-events (7.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-847558 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-847558 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.272498393s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.27s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-847558
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-847558: exit status 85 (68.08454ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-847558 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC |          |
	|         | -p download-only-847558        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 22:32:17
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 22:32:17.156028  283202 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:32:17.156251  283202 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:32:17.156279  283202 out.go:358] Setting ErrFile to fd 2...
	I0831 22:32:17.156298  283202 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:32:17.156625  283202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-277799/.minikube/bin
	W0831 22:32:17.156815  283202 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18943-277799/.minikube/config/config.json: open /home/jenkins/minikube-integration/18943-277799/.minikube/config/config.json: no such file or directory
	I0831 22:32:17.157335  283202 out.go:352] Setting JSON to true
	I0831 22:32:17.158247  283202 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":8086,"bootTime":1725135452,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0831 22:32:17.158350  283202 start.go:139] virtualization:  
	I0831 22:32:17.161578  283202 out.go:97] [download-only-847558] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0831 22:32:17.161742  283202 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/18943-277799/.minikube/cache/preloaded-tarball: no such file or directory
	I0831 22:32:17.161803  283202 notify.go:220] Checking for updates...
	I0831 22:32:17.163932  283202 out.go:169] MINIKUBE_LOCATION=18943
	I0831 22:32:17.166248  283202 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:32:17.168387  283202 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18943-277799/kubeconfig
	I0831 22:32:17.170744  283202 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-277799/.minikube
	I0831 22:32:17.173202  283202 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0831 22:32:17.177763  283202 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0831 22:32:17.178059  283202 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:32:17.208542  283202 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0831 22:32:17.208712  283202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:32:17.263941  283202 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-31 22:32:17.254402864 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:32:17.264057  283202 docker.go:307] overlay module found
	I0831 22:32:17.266776  283202 out.go:97] Using the docker driver based on user configuration
	I0831 22:32:17.266816  283202 start.go:297] selected driver: docker
	I0831 22:32:17.266831  283202 start.go:901] validating driver "docker" against <nil>
	I0831 22:32:17.266959  283202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:32:17.322475  283202 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-31 22:32:17.313462695 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:32:17.322643  283202 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 22:32:17.322924  283202 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0831 22:32:17.323083  283202 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0831 22:32:17.325803  283202 out.go:169] Using Docker driver with root privileges
	I0831 22:32:17.328387  283202 cni.go:84] Creating CNI manager for ""
	I0831 22:32:17.328432  283202 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0831 22:32:17.328453  283202 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0831 22:32:17.328542  283202 start.go:340] cluster config:
	{Name:download-only-847558 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-847558 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:32:17.331233  283202 out.go:97] Starting "download-only-847558" primary control-plane node in "download-only-847558" cluster
	I0831 22:32:17.331268  283202 cache.go:121] Beginning downloading kic base image for docker with crio
	I0831 22:32:17.334013  283202 out.go:97] Pulling base image v0.0.44-1724862063-19530 ...
	I0831 22:32:17.334048  283202 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0831 22:32:17.334218  283202 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local docker daemon
	I0831 22:32:17.349139  283202 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 to local cache
	I0831 22:32:17.349723  283202 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory
	I0831 22:32:17.349823  283202 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 to local cache
	I0831 22:32:17.400097  283202 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0831 22:32:17.400123  283202 cache.go:56] Caching tarball of preloaded images
	I0831 22:32:17.400753  283202 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0831 22:32:17.403702  283202 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0831 22:32:17.403722  283202 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0831 22:32:17.485068  283202 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/18943-277799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-847558 host does not exist
	  To start a cluster, run: "minikube start -p download-only-847558"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
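The "Last Start" log above fetches the v1.20.0 preload tarball with an md5 checksum appended to the download URL. If a cached preload is ever suspected of being corrupt, one quick cross-check (a sketch, assuming the cache path from the log above still exists) is to recompute the checksum locally and compare it to the value in the URL:

    md5sum /home/jenkins/minikube-integration/18943-277799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
    # expected (from the ?checksum=md5: parameter above): 59cd2ef07b53f039bfd1761b921f2a02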

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-847558
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0/json-events (6.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-030884 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-030884 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.181618743s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (6.18s)

                                                
                                    
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-030884
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-030884: exit status 85 (103.550978ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-847558 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC |                     |
	|         | -p download-only-847558        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	| delete  | -p download-only-847558        | download-only-847558 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC | 31 Aug 24 22:32 UTC |
	| start   | -o=json --download-only        | download-only-030884 | jenkins | v1.33.1 | 31 Aug 24 22:32 UTC |                     |
	|         | -p download-only-030884        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 22:32:24
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 22:32:24.843889  283401 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:32:24.844033  283401 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:32:24.844045  283401 out.go:358] Setting ErrFile to fd 2...
	I0831 22:32:24.844050  283401 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:32:24.844295  283401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-277799/.minikube/bin
	I0831 22:32:24.844756  283401 out.go:352] Setting JSON to true
	I0831 22:32:24.845643  283401 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":8093,"bootTime":1725135452,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0831 22:32:24.845713  283401 start.go:139] virtualization:  
	I0831 22:32:24.847789  283401 out.go:97] [download-only-030884] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0831 22:32:24.848031  283401 notify.go:220] Checking for updates...
	I0831 22:32:24.849196  283401 out.go:169] MINIKUBE_LOCATION=18943
	I0831 22:32:24.850483  283401 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:32:24.851883  283401 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18943-277799/kubeconfig
	I0831 22:32:24.853309  283401 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-277799/.minikube
	I0831 22:32:24.854477  283401 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0831 22:32:24.857066  283401 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0831 22:32:24.857319  283401 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:32:24.886198  283401 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0831 22:32:24.886311  283401 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:32:24.951320  283401 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-31 22:32:24.939733554 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:32:24.951440  283401 docker.go:307] overlay module found
	I0831 22:32:24.952985  283401 out.go:97] Using the docker driver based on user configuration
	I0831 22:32:24.953014  283401 start.go:297] selected driver: docker
	I0831 22:32:24.953021  283401 start.go:901] validating driver "docker" against <nil>
	I0831 22:32:24.953142  283401 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:32:25.008015  283401 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-31 22:32:24.998608763 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:32:25.010826  283401 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 22:32:25.011219  283401 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0831 22:32:25.011382  283401 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0831 22:32:25.015620  283401 out.go:169] Using Docker driver with root privileges
	I0831 22:32:25.017970  283401 cni.go:84] Creating CNI manager for ""
	I0831 22:32:25.018022  283401 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0831 22:32:25.018036  283401 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0831 22:32:25.018354  283401 start.go:340] cluster config:
	{Name:download-only-030884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-030884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:32:25.020484  283401 out.go:97] Starting "download-only-030884" primary control-plane node in "download-only-030884" cluster
	I0831 22:32:25.020527  283401 cache.go:121] Beginning downloading kic base image for docker with crio
	I0831 22:32:25.022750  283401 out.go:97] Pulling base image v0.0.44-1724862063-19530 ...
	I0831 22:32:25.022800  283401 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:32:25.023005  283401 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local docker daemon
	I0831 22:32:25.046347  283401 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 to local cache
	I0831 22:32:25.046572  283401 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory
	I0831 22:32:25.046595  283401 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory, skipping pull
	I0831 22:32:25.046600  283401 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 exists in cache, skipping pull
	I0831 22:32:25.046609  283401 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 as a tarball
	I0831 22:32:25.078947  283401 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	I0831 22:32:25.078976  283401 cache.go:56] Caching tarball of preloaded images
	I0831 22:32:25.079140  283401 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0831 22:32:25.080705  283401 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0831 22:32:25.080749  283401 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 ...
	I0831 22:32:25.162431  283401 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e6af375765e1700a37be5f07489fb80e -> /home/jenkins/minikube-integration/18943-277799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	I0831 22:32:29.440566  283401 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 ...
	I0831 22:32:29.440674  283401 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/18943-277799/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-030884 host does not exist
	  To start a cluster, run: "minikube start -p download-only-030884"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.10s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-030884
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.62s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-123480 --alsologtostderr --binary-mirror http://127.0.0.1:44745 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-123480" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-123480
--- PASS: TestBinaryMirror (0.62s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-926553
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-926553: exit status 85 (72.408748ms)

                                                
                                                
-- stdout --
	* Profile "addons-926553" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-926553"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
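The test only asserts that enabling an addon against a profile that does not yet exist fails cleanly with a non-zero exit status (85 in the run above). A roughly equivalent manual check, assuming the addons-926553 profile has not been created yet:

    out/minikube-linux-arm64 addons enable dashboard -p addons-926553
    echo $?   # expected to be non-zero (85 above) while the profile is missing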

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-926553
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-926553: exit status 85 (78.829022ms)

                                                
                                                
-- stdout --
	* Profile "addons-926553" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-926553"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/Setup (207.61s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-926553 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-926553 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (3m27.611929507s)
--- PASS: TestAddons/Setup (207.61s)
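Setup enables the full addon list in a single minikube start invocation. A quick way to confirm which addons actually ended up enabled on the profile afterwards (a sketch, not something the test itself runs) is:

    out/minikube-linux-arm64 addons list -p addons-926553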

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.29s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-926553 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-926553 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.29s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.86s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:345: "gadget-nzlbh" [6f45695d-815e-4797-82ae-c32bff29f863] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004099441s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-926553
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-926553: (5.853398899s)
--- PASS: TestAddons/parallel/InspektorGadget (11.86s)

                                                
                                    
x
+
TestAddons/parallel/CSI (39.5s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 9.263516ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-926553 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:395: (dbg) Run:  kubectl --context addons-926553 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-926553 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-926553 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-926553 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:345: "task-pv-pod" [9125f43d-4250-407c-a99b-2fe4f74b422e] Pending
helpers_test.go:345: "task-pv-pod" [9125f43d-4250-407c-a99b-2fe4f74b422e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:345: "task-pv-pod" [9125f43d-4250-407c-a99b-2fe4f74b422e] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.00402659s
addons_test.go:590: (dbg) Run:  kubectl --context addons-926553 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:420: (dbg) Run:  kubectl --context addons-926553 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:420: (dbg) Run:  kubectl --context addons-926553 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-926553 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-926553 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-926553 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:395: (dbg) Run:  kubectl --context addons-926553 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-926553 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-926553 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-926553 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-926553 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-926553 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-926553 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-926553 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:345: "task-pv-pod-restore" [82d98b93-fd73-4236-802e-333199f51bf0] Pending
helpers_test.go:345: "task-pv-pod-restore" [82d98b93-fd73-4236-802e-333199f51bf0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:345: "task-pv-pod-restore" [82d98b93-fd73-4236-802e-333199f51bf0] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004447317s
addons_test.go:632: (dbg) Run:  kubectl --context addons-926553 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-926553 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-926553 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-926553 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-926553 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.772746662s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-926553 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (39.50s)
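
The CSI run above walks a full provision, snapshot, and restore cycle using manifests from testdata/csi-hostpath-driver that are not reproduced in this report. A rough sketch of the snapshot and restore objects it implies is below; the class names csi-hostpath-snapclass and csi-hostpath-sc are assumptions, so check kubectl get volumesnapshotclass,storageclass before reusing this:

kubectl --context addons-926553 apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
  source:
    persistentVolumeClaimName: hpvc                 # the claim created earlier in the test
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc                 # assumed class name
  dataSource:                                       # restore: clone the new claim from the snapshot
    name: new-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF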

                                                
                                    
x
+
TestAddons/parallel/Headlamp (18.3s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-926553 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-926553 --alsologtostderr -v=1: (1.471325889s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:345: "headlamp-57fb76fcdb-7bd4m" [1164ff09-0299-47dd-a2f4-314fbf7656f9] Pending
helpers_test.go:345: "headlamp-57fb76fcdb-7bd4m" [1164ff09-0299-47dd-a2f4-314fbf7656f9] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:345: "headlamp-57fb76fcdb-7bd4m" [1164ff09-0299-47dd-a2f4-314fbf7656f9] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004704652s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-926553 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-926553 addons disable headlamp --alsologtostderr -v=1: (5.821189422s)
--- PASS: TestAddons/parallel/Headlamp (18.30s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.73s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:345: "cloud-spanner-emulator-769b77f747-nzrb6" [b31db60d-0b27-45db-bc2c-5455cc2c701d] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003591565s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-926553
--- PASS: TestAddons/parallel/CloudSpanner (6.73s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (8.4s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-926553 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-926553 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:395: (dbg) Run:  kubectl --context addons-926553 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-926553 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-926553 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-926553 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-926553 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:345: "test-local-path" [906362d0-538e-4b5c-9710-77a4fef8cdb4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:345: "test-local-path" [906362d0-538e-4b5c-9710-77a4fef8cdb4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:345: "test-local-path" [906362d0-538e-4b5c-9710-77a4fef8cdb4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.00404843s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-926553 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-926553 ssh "cat /opt/local-path-provisioner/pvc-329ee4ba-4ee8-45f1-ba46-e92218961da0_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-926553 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-926553 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-926553 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.40s)
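
The LocalPath run writes through a PVC and then reads the file straight off the node under /opt/local-path-provisioner. A minimal sketch of an equivalent claim and writer pod, assuming the rancher provisioner's storage class is named local-path (assumption) and using a placeholder for the generated PV directory:

kubectl --context addons-926553 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path        # assumed class name
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 64Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-local-path
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo local-path-test > /data/file1"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc
EOF
# once the pod completes, the file is visible on the node (the pvc-... directory is generated per claim):
out/minikube-linux-arm64 -p addons-926553 ssh "cat /opt/local-path-provisioner/<pvc-id>_default_test-pvc/file1"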

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.54s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:345: "nvidia-device-plugin-daemonset-9xvjf" [77f942fc-bc62-43bb-8ecc-dbe7e16cab48] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004733729s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-926553
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.54s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.78s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:345: "yakd-dashboard-67d98fc6b-866kn" [bff0632f-f6cb-4549-96f5-84ca0760978f] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005869924s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-926553 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-926553 addons disable yakd --alsologtostderr -v=1: (5.768376265s)
--- PASS: TestAddons/parallel/Yakd (11.78s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.19s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-926553
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-926553: (11.917157818s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-926553
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-926553
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-926553
--- PASS: TestAddons/StoppedEnableDisable (12.19s)

                                                
                                    
x
+
TestCertOptions (40.13s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-255069 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-255069 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (37.355199287s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-255069 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-255069 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-255069 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-255069" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-255069
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-255069: (2.068510898s)
--- PASS: TestCertOptions (40.13s)
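
cert_options starts the cluster with extra apiserver SANs (192.168.15.15, localhost, www.google.com) and a non-default apiserver port (8555), then reads the generated certificate back. A quick way to confirm both by hand, using the same certificate path and kubeconfig the test inspects:

out/minikube-linux-arm64 -p cert-options-255069 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
kubectl config view -o jsonpath='{.clusters[?(@.name=="cert-options-255069")].cluster.server}'   # server URL should end in :8555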

                                                
                                    
x
+
TestCertExpiration (249.48s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-699725 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-699725 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (39.761671869s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-699725 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-699725 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (26.912782028s)
helpers_test.go:176: Cleaning up "cert-expiration-699725" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-699725
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-699725: (2.808464371s)
--- PASS: TestCertExpiration (249.48s)
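
The expiration run first issues short-lived (3m) certificates and later restarts the same profile with --cert-expiration=8760h, which regenerates them. A sketch for reading the current expiry from the node, using the same certificate path as the test above:

out/minikube-linux-arm64 -p cert-expiration-699725 ssh "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"
# prints notAfter=<date>; after the 8760h restart the date should be roughly a year out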

                                                
                                    
x
+
TestForceSystemdFlag (34.31s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-567224 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-567224 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (31.604495844s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-567224 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-567224" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-567224
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-567224: (2.416475824s)
--- PASS: TestForceSystemdFlag (34.31s)
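
--force-systemd asks the container runtime to use the systemd cgroup driver, and the test reads /etc/crio/crio.conf.d/02-crio.conf to verify it. A minimal check along the same lines; the expected key/value (cgroup_manager = "systemd") is standard CRI-O configuration, stated here as an assumption rather than quoted from the test:

out/minikube-linux-arm64 -p force-systemd-flag-567224 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
# expected on a systemd-managed runtime:
#   cgroup_manager = "systemd"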

                                                
                                    
x
+
TestForceSystemdEnv (39.27s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-545879 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0831 23:33:55.232985  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/functional-499633/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-545879 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.417531907s)
helpers_test.go:176: Cleaning up "force-systemd-env-545879" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-545879
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-545879: (2.849328173s)
--- PASS: TestForceSystemdEnv (39.27s)

                                                
                                    
x
+
TestErrorSpam/setup (34.04s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-726767 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-726767 --driver=docker  --container-runtime=crio
E0831 22:51:01.242113  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:51:01.249255  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:51:01.260680  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:51:01.282050  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:51:01.323454  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:51:01.404844  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:51:01.566399  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:51:01.888009  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:51:02.529715  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:51:03.811073  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:51:06.372776  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:51:11.494744  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-726767 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-726767 --driver=docker  --container-runtime=crio: (34.043390239s)
--- PASS: TestErrorSpam/setup (34.04s)

                                                
                                    
x
+
TestErrorSpam/start (0.79s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-726767 --log_dir /tmp/nospam-726767 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-726767 --log_dir /tmp/nospam-726767 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-726767 --log_dir /tmp/nospam-726767 start --dry-run
--- PASS: TestErrorSpam/start (0.79s)

                                                
                                    
x
+
TestErrorSpam/status (1.21s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-726767 --log_dir /tmp/nospam-726767 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-726767 --log_dir /tmp/nospam-726767 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-726767 --log_dir /tmp/nospam-726767 status
--- PASS: TestErrorSpam/status (1.21s)

                                                
                                    
x
+
TestErrorSpam/pause (1.83s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-726767 --log_dir /tmp/nospam-726767 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-726767 --log_dir /tmp/nospam-726767 pause
E0831 22:51:21.736604  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-726767 --log_dir /tmp/nospam-726767 pause
--- PASS: TestErrorSpam/pause (1.83s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.81s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-726767 --log_dir /tmp/nospam-726767 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-726767 --log_dir /tmp/nospam-726767 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-726767 --log_dir /tmp/nospam-726767 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

                                                
                                    
x
+
TestErrorSpam/stop (1.46s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-726767 --log_dir /tmp/nospam-726767 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-726767 --log_dir /tmp/nospam-726767 stop: (1.264453698s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-726767 --log_dir /tmp/nospam-726767 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-726767 --log_dir /tmp/nospam-726767 stop
--- PASS: TestErrorSpam/stop (1.46s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/18943-277799/.minikube/files/etc/test/nested/copy/283197/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (51.31s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-499633 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0831 22:51:42.218116  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-499633 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (51.311281212s)
--- PASS: TestFunctional/serial/StartWithProxy (51.31s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (29.51s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-499633 --alsologtostderr -v=8
E0831 22:52:23.179482  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-499633 --alsologtostderr -v=8: (29.508065224s)
functional_test.go:663: soft start took 29.511346836s for "functional-499633" cluster.
--- PASS: TestFunctional/serial/SoftStart (29.51s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-499633 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (11.81s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-499633 cache add registry.k8s.io/pause:3.1: (1.557008196s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-499633 cache add registry.k8s.io/pause:3.3: (8.752558361s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-499633 cache add registry.k8s.io/pause:latest: (1.499082108s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (11.81s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.46s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-499633 /tmp/TestFunctionalserialCacheCmdcacheadd_local2016553886/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 cache add minikube-local-cache-test:functional-499633
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 cache delete minikube-local-cache-test:functional-499633
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-499633
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.46s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (2.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-499633 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (317.831236ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-499633 cache reload: (1.33933208s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.28s)
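
The cache_reload sequence shows the point of minikube's image cache: the image is removed from the node's runtime with crictl, the inspect fails, and cache reload pushes the host-side cached image back so the second inspect succeeds. Condensed from the log above:

out/minikube-linux-arm64 -p functional-499633 ssh sudo crictl rmi registry.k8s.io/pause:latest
out/minikube-linux-arm64 -p functional-499633 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image no longer present
out/minikube-linux-arm64 -p functional-499633 cache reload
out/minikube-linux-arm64 -p functional-499633 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds after the reload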

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 kubectl -- --context functional-499633 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-499633 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (37.67s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-499633 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0831 22:53:45.101609  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-499633 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.674440751s)
functional_test.go:761: restart took 37.674553332s for "functional-499633" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.67s)
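
--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision is passed through to the kube-apiserver static pod on restart. One way to confirm the flag landed, assuming the usual kube-apiserver-<node> static pod name for this profile:

kubectl --context functional-499633 -n kube-system get pod kube-apiserver-functional-499633 -o yaml | grep enable-admission-plugins
# the command list should include NamespaceAutoProvision among the enabled plugins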

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-499633 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.69s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-499633 logs: (1.692964969s)
--- PASS: TestFunctional/serial/LogsCmd (1.69s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.75s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 logs --file /tmp/TestFunctionalserialLogsFileCmd3017214105/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-499633 logs --file /tmp/TestFunctionalserialLogsFileCmd3017214105/001/logs.txt: (1.751784801s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.75s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.12s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-499633 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-499633
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-499633: exit status 115 (539.408096ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30324 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-499633 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.12s)
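
The invalid-service check applies testdata/invalidsvc.yaml (not shown in this report) and expects minikube service to exit with SVC_UNREACHABLE because no running pod backs the Service. A hypothetical manifest that reproduces the same condition, a NodePort Service whose selector matches nothing:

kubectl --context functional-499633 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: does-not-exist    # no pod carries this label
  ports:
  - port: 80
    targetPort: 80
EOF
out/minikube-linux-arm64 service invalid-svc -p functional-499633   # fails the same way (exit status 115 in the run above)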

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-499633 config get cpus: exit status 14 (78.080547ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-499633 config get cpus: exit status 14 (74.286421ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (12.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-499633 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-499633 --alsologtostderr -v=1] ...
helpers_test.go:509: unable to kill pid 310785: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.20s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-499633 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-499633 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (202.901158ms)

                                                
                                                
-- stdout --
	* [functional-499633] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-277799/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-277799/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 22:54:27.294626  310548 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:54:27.294809  310548 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:54:27.294821  310548 out.go:358] Setting ErrFile to fd 2...
	I0831 22:54:27.294828  310548 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:54:27.295080  310548 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-277799/.minikube/bin
	I0831 22:54:27.295545  310548 out.go:352] Setting JSON to false
	I0831 22:54:27.296564  310548 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9416,"bootTime":1725135452,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0831 22:54:27.296637  310548 start.go:139] virtualization:  
	I0831 22:54:27.300361  310548 out.go:177] * [functional-499633] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0831 22:54:27.303714  310548 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 22:54:27.303730  310548 notify.go:220] Checking for updates...
	I0831 22:54:27.306536  310548 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:54:27.309383  310548 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-277799/kubeconfig
	I0831 22:54:27.313217  310548 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-277799/.minikube
	I0831 22:54:27.316985  310548 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0831 22:54:27.320156  310548 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 22:54:27.323452  310548 config.go:182] Loaded profile config "functional-499633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:54:27.324003  310548 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:54:27.358413  310548 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0831 22:54:27.358559  310548 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:54:27.423826  310548 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-31 22:54:27.412938315 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:54:27.423943  310548 docker.go:307] overlay module found
	I0831 22:54:27.426618  310548 out.go:177] * Using the docker driver based on existing profile
	I0831 22:54:27.429141  310548 start.go:297] selected driver: docker
	I0831 22:54:27.429168  310548 start.go:901] validating driver "docker" against &{Name:functional-499633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-499633 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:54:27.429281  310548 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 22:54:27.432454  310548 out.go:201] 
	W0831 22:54:27.435097  310548 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0831 22:54:27.437709  310548 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-499633 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-499633 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-499633 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (229.572755ms)

                                                
                                                
-- stdout --
	* [functional-499633] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-277799/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-277799/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 22:54:27.081435  310503 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:54:27.081706  310503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:54:27.081742  310503 out.go:358] Setting ErrFile to fd 2...
	I0831 22:54:27.081763  310503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:54:27.082157  310503 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-277799/.minikube/bin
	I0831 22:54:27.082617  310503 out.go:352] Setting JSON to false
	I0831 22:54:27.083625  310503 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9415,"bootTime":1725135452,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0831 22:54:27.083741  310503 start.go:139] virtualization:  
	I0831 22:54:27.087037  310503 out.go:177] * [functional-499633] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0831 22:54:27.090492  310503 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 22:54:27.090774  310503 notify.go:220] Checking for updates...
	I0831 22:54:27.101690  310503 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:54:27.104468  310503 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-277799/kubeconfig
	I0831 22:54:27.107184  310503 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-277799/.minikube
	I0831 22:54:27.111007  310503 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0831 22:54:27.113827  310503 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 22:54:27.117067  310503 config.go:182] Loaded profile config "functional-499633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:54:27.117790  310503 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:54:27.148841  310503 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0831 22:54:27.148948  310503 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:54:27.218935  310503 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-31 22:54:27.209038738 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:54:27.219155  310503 docker.go:307] overlay module found
	I0831 22:54:27.223902  310503 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0831 22:54:27.226402  310503 start.go:297] selected driver: docker
	I0831 22:54:27.226424  310503 start.go:901] validating driver "docker" against &{Name:functional-499633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-499633 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:54:27.226563  310503 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 22:54:27.229682  310503 out.go:201] 
	W0831 22:54:27.232326  310503 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0831 22:54:27.234906  310503 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)
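
The French output above comes from minikube selecting translated strings based on the caller's locale. A stdlib-only Go sketch of that general pattern follows; the message map and locale handling are illustrative and differ from minikube's real translation machinery.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// messages maps a language prefix to one translated string, standing in for
	// a full translation catalogue; only the lookup pattern matters here.
	var messages = map[string]string{
		"en": "Using the docker driver based on existing profile",
		"fr": "Utilisation du pilote docker basé sur le profil existant",
	}

	// detectLang picks the UI language from the usual POSIX locale variables,
	// falling back to English, which is roughly how localized CLIs behave.
	func detectLang() string {
		for _, v := range []string{"LC_ALL", "LC_MESSAGES", "LANG"} {
			if val := os.Getenv(v); val != "" {
				lang := strings.SplitN(val, "_", 2)[0]
				if _, ok := messages[lang]; ok {
					return lang
				}
			}
		}
		return "en"
	}

	func main() {
		fmt.Println("* " + messages[detectLang()])
	}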

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.27s)
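
The second status invocation above renders the result through a Go template passed via -f (the "kublet" label is copied verbatim from the test). A small sketch of the same text/template mechanism applied to a status-like struct; Status here is a hypothetical stand-in for whatever minikube renders internally.

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a hypothetical stand-in; only the field names referenced by the
	// template in the StatusCmd test above matter.
	type Status struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		// Same template shape as the -f argument above, including the verbatim "kublet" label.
		const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
		tmpl := template.Must(template.New("status").Parse(format))
		st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
	}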

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (11.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-499633 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-499633 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:345: "hello-node-connect-65d86f57f4-zgb74" [e19beb74-9385-4ebb-a695-aa3c5a3f767e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:345: "hello-node-connect-65d86f57f4-zgb74" [e19beb74-9385-4ebb-a695-aa3c5a3f767e] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003832901s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31031
functional_test.go:1675: http://192.168.49.2:31031: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-65d86f57f4-zgb74

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31031
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.72s)
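
The ServiceCmdConnect test above boils down to: create a deployment, expose it as a NodePort, resolve the URL, and fetch it. A hedged Go sketch of the final step, polling the endpoint until it answers or a deadline passes; the URL is the one from the log and would differ on any other cluster.

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForEndpoint retries a GET until the service responds with 200 OK or the
	// deadline expires, mirroring the "found endpoint ... success!" flow above.
	func waitForEndpoint(url string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for {
			resp, err := http.Get(url)
			if err == nil {
				body, readErr := io.ReadAll(resp.Body)
				resp.Body.Close()
				if readErr == nil && resp.StatusCode == http.StatusOK {
					return string(body), nil
				}
				err = fmt.Errorf("status %s, read error: %v", resp.Status, readErr)
			}
			if time.Now().After(deadline) {
				return "", fmt.Errorf("endpoint %s not ready: %v", url, err)
			}
			time.Sleep(2 * time.Second)
		}
	}

	func main() {
		body, err := waitForEndpoint("http://192.168.49.2:31031", time.Minute)
		if err != nil {
			panic(err)
		}
		fmt.Println(body)
	}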

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (27.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:345: "storage-provisioner" [8eeea570-4591-4047-9fc5-7730fb6ee7ad] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004072756s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-499633 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-499633 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-499633 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-499633 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:345: "sp-pod" [648fd96a-0d48-46ea-bb73-c456bca7f8f7] Pending
helpers_test.go:345: "sp-pod" [648fd96a-0d48-46ea-bb73-c456bca7f8f7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:345: "sp-pod" [648fd96a-0d48-46ea-bb73-c456bca7f8f7] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003462127s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-499633 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-499633 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-499633 delete -f testdata/storage-provisioner/pod.yaml: (1.120062636s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-499633 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:345: "sp-pod" [2fa767d4-2198-48f1-b9e3-8721fdd2f846] Pending
helpers_test.go:345: "sp-pod" [2fa767d4-2198-48f1-b9e3-8721fdd2f846] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:345: "sp-pod" [2fa767d4-2198-48f1-b9e3-8721fdd2f846] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003315114s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-499633 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.12s)
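
The PersistentVolumeClaim test above verifies persistence by writing a file through one pod, deleting that pod, recreating it from the same manifest, and reading the file back. A rough Go sketch of that sequence driven through kubectl; the context name and manifest paths are taken from the log and are assumptions on any other machine.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run shells out to kubectl against the functional-499633 context used above;
	// on another cluster the context and paths would differ.
	func run(args ...string) error {
		cmd := exec.Command("kubectl", append([]string{"--context", "functional-499633"}, args...)...)
		out, err := cmd.CombinedOutput()
		fmt.Printf("$ kubectl %v\n%s", args, out)
		return err
	}

	func main() {
		steps := [][]string{
			{"apply", "-f", "testdata/storage-provisioner/pvc.yaml"},
			{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
			{"wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=180s"},
			{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},       // write through the first pod
			{"delete", "-f", "testdata/storage-provisioner/pod.yaml"}, // drop the pod, keep the claim
			{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},  // recreate it on the same PVC
			{"wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=180s"},
			{"exec", "sp-pod", "--", "ls", "/tmp/mount"}, // the file should still be there
		}
		for _, s := range steps {
			if err := run(s...); err != nil {
				panic(err)
			}
		}
	}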

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh -n functional-499633 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 cp functional-499633:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd906264852/001/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh -n functional-499633 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh -n functional-499633 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.33s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/283197/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh "sudo cat /etc/test/nested/copy/283197/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/283197.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh "sudo cat /etc/ssl/certs/283197.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/283197.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh "sudo cat /usr/share/ca-certificates/283197.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/2831972.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh "sudo cat /etc/ssl/certs/2831972.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/2831972.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh "sudo cat /usr/share/ca-certificates/2831972.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.20s)
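
CertSync above checks that the same certificate is visible at several canonical paths inside the VM, including the OpenSSL subject-hash name 51391683.0. A small Go sketch of the underlying idea, hashing the files and confirming they match; the paths are the ones from the log and exist only inside the minikube VM, so in practice this runs over `minikube ssh` rather than on the host.

	package main

	import (
		"crypto/sha256"
		"fmt"
		"os"
	)

	func digest(path string) ([32]byte, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return [32]byte{}, err
		}
		return sha256.Sum256(data), nil
	}

	func main() {
		// Paths taken from the CertSync log above.
		paths := []string{
			"/etc/ssl/certs/283197.pem",
			"/usr/share/ca-certificates/283197.pem",
			"/etc/ssl/certs/51391683.0",
		}
		ref, err := digest(paths[0])
		if err != nil {
			panic(err)
		}
		for _, p := range paths[1:] {
			d, err := digest(p)
			if err != nil {
				panic(err)
			}
			if d != ref {
				panic(fmt.Sprintf("%s differs from %s", p, paths[0]))
			}
		}
		fmt.Println("all synced certificate copies are identical")
	}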

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-499633 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-499633 ssh "sudo systemctl is-active docker": exit status 1 (314.378591ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-499633 ssh "sudo systemctl is-active containerd": exit status 1 (290.803393ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)
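
NonActiveRuntimeDisabled above relies on `systemctl is-active` exiting non-zero (here surfaced as ssh status 3) when a unit is not running, while still printing the state. A Go sketch of reading that exit code with os/exec; the unit names are just examples and in the test the command runs inside the VM over ssh.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	// isActive reports whether a systemd unit is "active": systemctl prints the
	// state and exits 0 for active, non-zero (typically 3) otherwise, which is
	// exactly what the test above keys on.
	func isActive(unit string) (bool, string, error) {
		out, err := exec.Command("systemctl", "is-active", unit).CombinedOutput()
		state := strings.TrimSpace(string(out))
		if err == nil {
			return true, state, nil
		}
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return false, state, nil // unit known but not active, e.g. "inactive"
		}
		return false, state, err // systemctl itself could not be run
	}

	func main() {
		for _, unit := range []string{"docker", "containerd", "crio"} {
			active, state, err := isActive(unit)
			if err != nil {
				panic(err)
			}
			fmt.Printf("%s: active=%v (%s)\n", unit, active, state)
		}
	}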

                                                
                                    
x
+
TestFunctional/parallel/License (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-499633 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-499633 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-499633 tunnel --alsologtostderr] ...
helpers_test.go:491: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-499633 tunnel --alsologtostderr] ...
helpers_test.go:509: unable to kill pid 308289: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-499633 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-499633 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:345: "nginx-svc" [ee0badda-cf49-487d-a326-a16278a7f23f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:345: "nginx-svc" [ee0badda-cf49-487d-a326-a16278a7f23f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004891601s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.45s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-499633 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.63.190 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-499633 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (7.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-499633 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-499633 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:345: "hello-node-64b4f8f9ff-gw6f2" [73c764f7-fadd-4957-8401-d8ab5c861f96] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:345: "hello-node-64b4f8f9ff-gw6f2" [73c764f7-fadd-4957-8401-d8ab5c861f96] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004788459s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.24s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "425.690772ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "77.23268ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.50s)
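
The `Took "..."` lines above come from timing each profile list invocation. A trivial Go sketch of the same measurement around an external command; the binary path is the one used in the log and the timing helper is not minikube's own.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		// Same invocation as the test above; swap in any command you want to time.
		out, err := exec.Command("out/minikube-linux-arm64", "profile", "list").CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("%v: %s", err, out))
		}
		fmt.Printf("Took %q to run \"out/minikube-linux-arm64 profile list\"\n", time.Since(start).String())
	}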

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 service list -o json
functional_test.go:1494: Took "609.304769ms" to run "out/minikube-linux-arm64 -p functional-499633 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "429.342434ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "99.941989ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31083
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.64s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-499633 /tmp/TestFunctionalparallelMountCmdany-port727970782/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1725144864313640116" to /tmp/TestFunctionalparallelMountCmdany-port727970782/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1725144864313640116" to /tmp/TestFunctionalparallelMountCmdany-port727970782/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1725144864313640116" to /tmp/TestFunctionalparallelMountCmdany-port727970782/001/test-1725144864313640116
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-499633 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (424.59917ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 31 22:54 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 31 22:54 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 31 22:54 test-1725144864313640116
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh cat /mount-9p/test-1725144864313640116
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-499633 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:345: "busybox-mount" [9d2d1f13-24e9-4912-ada7-a92e9cd6f555] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:345: "busybox-mount" [9d2d1f13-24e9-4912-ada7-a92e9cd6f555] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:345: "busybox-mount" [9d2d1f13-24e9-4912-ada7-a92e9cd6f555] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.013015116s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-499633 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-499633 /tmp/TestFunctionalparallelMountCmdany-port727970782/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.74s)
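
In the any-port mount test above, the first `findmnt -T /mount-9p` fails (exit status 1) because the 9p mount daemon has not finished yet, and the test simply retries. A Go sketch of that retry loop; the mount point and deadline are taken from the log, and in the test the command runs inside the VM via `minikube ssh` rather than locally.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForMount polls findmnt until the target shows up as a mounted
	// filesystem or the deadline passes, matching the retry visible above.
	func waitForMount(target string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			out, err := exec.Command("findmnt", "-T", target).CombinedOutput()
			if err == nil {
				fmt.Print(string(out))
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("%s never appeared as a mount: %v", target, err)
			}
			time.Sleep(time.Second)
		}
	}

	func main() {
		if err := waitForMount("/mount-9p", 30*time.Second); err != nil {
			panic(err)
		}
	}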

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31083
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-499633 /tmp/TestFunctionalparallelMountCmdspecific-port2706270059/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-499633 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (529.777731ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-499633 /tmp/TestFunctionalparallelMountCmdspecific-port2706270059/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-499633 ssh "sudo umount -f /mount-9p": exit status 1 (373.987937ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-499633 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-499633 /tmp/TestFunctionalparallelMountCmdspecific-port2706270059/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.60s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-499633 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4233207989/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-499633 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4233207989/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-499633 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4233207989/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-499633 ssh "findmnt -T" /mount1: exit status 1 (892.515427ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-499633 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-499633 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4233207989/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:491: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-499633 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4233207989/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:491: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-499633 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4233207989/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:491: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.57s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-499633 version -o=json --components: (1.001163004s)
--- PASS: TestFunctional/parallel/Version/components (1.00s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-499633 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-499633
localhost/kicbase/echo-server:functional-499633
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-499633 image ls --format short --alsologtostderr:
I0831 22:54:46.809970  313393 out.go:345] Setting OutFile to fd 1 ...
I0831 22:54:46.810089  313393 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:54:46.810100  313393 out.go:358] Setting ErrFile to fd 2...
I0831 22:54:46.810105  313393 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:54:46.810366  313393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-277799/.minikube/bin
I0831 22:54:46.811088  313393 config.go:182] Loaded profile config "functional-499633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0831 22:54:46.811226  313393 config.go:182] Loaded profile config "functional-499633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0831 22:54:46.811748  313393 cli_runner.go:164] Run: docker container inspect functional-499633 --format={{.State.Status}}
I0831 22:54:46.840862  313393 ssh_runner.go:195] Run: systemctl --version
I0831 22:54:46.840917  313393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-499633
I0831 22:54:46.864622  313393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/functional-499633/id_rsa Username:docker}
I0831 22:54:46.957371  313393 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-499633 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | alpine             | 70594c812316a | 48.4MB |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| registry.k8s.io/kube-apiserver          | v1.31.0            | cd0f0ae0ec9e0 | 92.6MB |
| registry.k8s.io/kube-proxy              | v1.31.0            | 71d55d66fd4ee | 95.9MB |
| docker.io/library/nginx                 | latest             | a9dfdba8b7190 | 197MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 27e3830e14027 | 140MB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 6a23fa8fd2b78 | 90.3MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| localhost/kicbase/echo-server           | functional-499633  | ce2d2cda2d858 | 4.79MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | 2437cf7621777 | 58.8MB |
| registry.k8s.io/kube-scheduler          | v1.31.0            | fbbbd428abb4d | 67MB   |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | d5e283bc63d43 | 90.3MB |
| localhost/minikube-local-cache-test     | functional-499633  | e8d99d4a3ac03 | 3.33kB |
| registry.k8s.io/kube-controller-manager | v1.31.0            | fcb0683e6bdbd | 86.9MB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-499633 image ls --format table --alsologtostderr:
I0831 22:54:47.150627  313462 out.go:345] Setting OutFile to fd 1 ...
I0831 22:54:47.150796  313462 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:54:47.150808  313462 out.go:358] Setting ErrFile to fd 2...
I0831 22:54:47.150813  313462 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:54:47.151603  313462 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-277799/.minikube/bin
I0831 22:54:47.152460  313462 config.go:182] Loaded profile config "functional-499633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0831 22:54:47.152622  313462 config.go:182] Loaded profile config "functional-499633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0831 22:54:47.153300  313462 cli_runner.go:164] Run: docker container inspect functional-499633 --format={{.State.Status}}
I0831 22:54:47.171923  313462 ssh_runner.go:195] Run: systemctl --version
I0831 22:54:47.171974  313462 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-499633
I0831 22:54:47.190744  313462 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/functional-499633/id_rsa Username:docker}
I0831 22:54:47.294406  313462 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)
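
`image ls --format json`, exercised by the ImageListJson test that follows, returns entries with id, repoTags, repoDigests and size fields, as visible in the stdout below. A minimal Go sketch that decodes that shape from the command's output; the struct is inferred from the logged JSON rather than taken from minikube's source.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// image mirrors the fields visible in the ImageListJson stdout below; it is
	// an inferred shape, not an official minikube API type.
	type image struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-499633",
			"image", "ls", "--format", "json").Output()
		if err != nil {
			panic(err)
		}
		var images []image
		if err := json.Unmarshal(out, &images); err != nil {
			panic(err)
		}
		for _, img := range images {
			if len(img.RepoTags) > 0 {
				fmt.Printf("%-60s %s bytes\n", img.RepoTags[0], img.Size)
			}
		}
	}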

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-499633 image ls --format json --alsologtostderr:
[{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-499633"],"size":"4788229"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:c26d1775b97b4ba3436f3cdc4d5c153b773ce2b3f5ad8e201f16b09e7182d63e"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"90290738"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernet
esui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc","repoDigests":["docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add","docker.io/library/nginx@sha256:bab0713884fed8a137ba5bd2d67d218c6192bd79b5a3526d3eb15567e035eb55"],"repoTags":["docker.io/library/nginx:latest"],"size":"197172049"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io
/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:74a8050ec347821b7884ab635f3e7883b5c570388ed8087ffd01fd9fe1cb39c6"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size
":"92567005"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"90295858"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb1
8f28d8aa3fab5a6e91b4af60026a","registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139912446"},{"id":"fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:ed8613b19e25d56d25e9ba0d83fd1e14f8ba070cb80e2674ba62ded55e260a9c","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"86930758"},{"id":"e8d99d4a3ac03c60aa62f99b55adac4dd747d7aac86b42739643e0ed7ef26cca","repoDigests":["localhost/minikube-local-cache-test@sha256:aba3db60dd89e5492a05e6894b3e983cc0e3c904ceeaae02576f4944c999342d"],"repoTags":["localhost/minikube-local-cache-test:functional-499633"],"size":"3330"},{"id":"71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":["registry.k8s.io/kube-proxy@sha256:b7d336a1c5e9719bafe8a97dbb2c
503580b5ac898f3f40329fc98f6a1f0ea971","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"95949719"},{"id":"fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":["registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808","registry.k8s.io/kube-scheduler@sha256:dd427ccac78f027990d5a00936681095842a0d813c70ecc2d4f65f3bd3beef77"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"67007814"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753","repoDigests":["docker.io/library/nginx@sha256:ba188f579f7a2638229e326e78c957a185630e303757813ef1ad7aac1b8248b6","docker.io/
library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158"],"repoTags":["docker.io/library/nginx:alpine"],"size":"48397013"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"58812704"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-499633 image ls --format json --alsologtostderr:
I0831 22:54:47.097926  313457 out.go:345] Setting OutFile to fd 1 ...
I0831 22:54:47.098115  313457 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:54:47.098127  313457 out.go:358] Setting ErrFile to fd 2...
I0831 22:54:47.098132  313457 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:54:47.098366  313457 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-277799/.minikube/bin
I0831 22:54:47.099003  313457 config.go:182] Loaded profile config "functional-499633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0831 22:54:47.099127  313457 config.go:182] Loaded profile config "functional-499633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0831 22:54:47.099644  313457 cli_runner.go:164] Run: docker container inspect functional-499633 --format={{.State.Status}}
I0831 22:54:47.120983  313457 ssh_runner.go:195] Run: systemctl --version
I0831 22:54:47.121045  313457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-499633
I0831 22:54:47.154693  313457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/functional-499633/id_rsa Username:docker}
I0831 22:54:47.248839  313457 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
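Note: the stdout above from `image ls --format json` is a flat JSON array of records with id, repoDigests, repoTags, and size fields. For quick inspection outside the test harness it can be filtered with jq; a minimal sketch, assuming jq is installed on the host (not part of the test suite):
  out/minikube-linux-arm64 -p functional-499633 image ls --format json \
    | jq -r '.[] | "\(.repoTags[0] // "<none>")\t\(.size)"'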

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-499633 image ls --format yaml --alsologtostderr:
- id: fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:ed8613b19e25d56d25e9ba0d83fd1e14f8ba070cb80e2674ba62ded55e260a9c
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "86930758"
- id: d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:c26d1775b97b4ba3436f3cdc4d5c153b773ce2b3f5ad8e201f16b09e7182d63e
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "90290738"
- id: e8d99d4a3ac03c60aa62f99b55adac4dd747d7aac86b42739643e0ed7ef26cca
repoDigests:
- localhost/minikube-local-cache-test@sha256:aba3db60dd89e5492a05e6894b3e983cc0e3c904ceeaae02576f4944c999342d
repoTags:
- localhost/minikube-local-cache-test:functional-499633
size: "3330"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:74a8050ec347821b7884ab635f3e7883b5c570388ed8087ffd01fd9fe1cb39c6
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "92567005"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: 6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "90295858"
- id: a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc
repoDigests:
- docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
- docker.io/library/nginx@sha256:bab0713884fed8a137ba5bd2d67d218c6192bd79b5a3526d3eb15567e035eb55
repoTags:
- docker.io/library/nginx:latest
size: "197172049"
- id: 71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b7d336a1c5e9719bafe8a97dbb2c503580b5ac898f3f40329fc98f6a1f0ea971
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "95949719"
- id: fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
- registry.k8s.io/kube-scheduler@sha256:dd427ccac78f027990d5a00936681095842a0d813c70ecc2d4f65f3bd3beef77
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "67007814"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
- registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139912446"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753
repoDigests:
- docker.io/library/nginx@sha256:ba188f579f7a2638229e326e78c957a185630e303757813ef1ad7aac1b8248b6
- docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158
repoTags:
- docker.io/library/nginx:alpine
size: "48397013"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-499633
size: "4788229"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "58812704"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-499633 image ls --format yaml --alsologtostderr:
I0831 22:54:46.836774  313394 out.go:345] Setting OutFile to fd 1 ...
I0831 22:54:46.836928  313394 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:54:46.836940  313394 out.go:358] Setting ErrFile to fd 2...
I0831 22:54:46.836946  313394 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:54:46.837268  313394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-277799/.minikube/bin
I0831 22:54:46.837962  313394 config.go:182] Loaded profile config "functional-499633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0831 22:54:46.838132  313394 config.go:182] Loaded profile config "functional-499633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0831 22:54:46.838682  313394 cli_runner.go:164] Run: docker container inspect functional-499633 --format={{.State.Status}}
I0831 22:54:46.857929  313394 ssh_runner.go:195] Run: systemctl --version
I0831 22:54:46.857992  313394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-499633
I0831 22:54:46.880133  313394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/functional-499633/id_rsa Username:docker}
I0831 22:54:46.982090  313394 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-499633 ssh pgrep buildkitd: exit status 1 (276.304655ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 image build -t localhost/my-image:functional-499633 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-499633 image build -t localhost/my-image:functional-499633 testdata/build --alsologtostderr: (3.122631985s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-arm64 -p functional-499633 image build -t localhost/my-image:functional-499633 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 9d21a0b688e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-499633
--> db6afcd3cf2
Successfully tagged localhost/my-image:functional-499633
db6afcd3cf249cf5d0543b27a61455e1eea793cadb9a32939f49da9f7f999a3e
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-499633 image build -t localhost/my-image:functional-499633 testdata/build --alsologtostderr:
I0831 22:54:47.624768  313577 out.go:345] Setting OutFile to fd 1 ...
I0831 22:54:47.625546  313577 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:54:47.625597  313577 out.go:358] Setting ErrFile to fd 2...
I0831 22:54:47.625618  313577 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:54:47.625926  313577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-277799/.minikube/bin
I0831 22:54:47.626896  313577 config.go:182] Loaded profile config "functional-499633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0831 22:54:47.627522  313577 config.go:182] Loaded profile config "functional-499633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0831 22:54:47.628129  313577 cli_runner.go:164] Run: docker container inspect functional-499633 --format={{.State.Status}}
I0831 22:54:47.644985  313577 ssh_runner.go:195] Run: systemctl --version
I0831 22:54:47.645037  313577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-499633
I0831 22:54:47.661728  313577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/functional-499633/id_rsa Username:docker}
I0831 22:54:47.752906  313577 build_images.go:161] Building image from path: /tmp/build.3285484159.tar
I0831 22:54:47.752988  313577 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0831 22:54:47.762081  313577 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3285484159.tar
I0831 22:54:47.765513  313577 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3285484159.tar: stat -c "%s %y" /var/lib/minikube/build/build.3285484159.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3285484159.tar': No such file or directory
I0831 22:54:47.765542  313577 ssh_runner.go:362] scp /tmp/build.3285484159.tar --> /var/lib/minikube/build/build.3285484159.tar (3072 bytes)
I0831 22:54:47.793716  313577 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3285484159
I0831 22:54:47.802736  313577 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3285484159 -xf /var/lib/minikube/build/build.3285484159.tar
I0831 22:54:47.812138  313577 crio.go:315] Building image: /var/lib/minikube/build/build.3285484159
I0831 22:54:47.812257  313577 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-499633 /var/lib/minikube/build/build.3285484159 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0831 22:54:50.669773  313577 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-499633 /var/lib/minikube/build/build.3285484159 --cgroup-manager=cgroupfs: (2.857484299s)
I0831 22:54:50.669839  313577 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3285484159
I0831 22:54:50.678778  313577 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3285484159.tar
I0831 22:54:50.687828  313577 build_images.go:217] Built localhost/my-image:functional-499633 from /tmp/build.3285484159.tar
I0831 22:54:50.687859  313577 build_images.go:133] succeeded building to: functional-499633
I0831 22:54:50.687865  313577 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.63s)
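The stderr above shows how the build is staged: the local testdata/build context is tarred, copied into the node under /var/lib/minikube/build, unpacked, and built with podman using the cgroupfs cgroup manager. A rough manual equivalent of that flow, with illustrative paths rather than the exact tmp names the test generates:
  tar -cf /tmp/manual-build.tar -C testdata/build .
  out/minikube-linux-arm64 -p functional-499633 cp /tmp/manual-build.tar /var/lib/minikube/build/manual-build.tar
  out/minikube-linux-arm64 -p functional-499633 ssh -- sudo mkdir -p /var/lib/minikube/build/manual
  out/minikube-linux-arm64 -p functional-499633 ssh -- sudo tar -C /var/lib/minikube/build/manual -xf /var/lib/minikube/build/manual-build.tar
  out/minikube-linux-arm64 -p functional-499633 ssh -- sudo podman build -t localhost/my-image:functional-499633 /var/lib/minikube/build/manual --cgroup-manager=cgroupfs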

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
2024/08/31 22:54:39 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-499633
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.72s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 image load --daemon kicbase/echo-server:functional-499633 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-499633 image load --daemon kicbase/echo-server:functional-499633 --alsologtostderr: (1.390387385s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.68s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 image load --daemon kicbase/echo-server:functional-499633 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.18s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-499633
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 image load --daemon kicbase/echo-server:functional-499633 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 image save kicbase/echo-server:functional-499633 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.59s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 image rm kicbase/echo-server:functional-499633 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.66s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.86s)
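Taken together, the ImageSaveToFile, ImageRemove, and ImageLoadFromFile steps above amount to a save–remove–reload round trip. Condensed, with an illustrative tarball path in place of the workspace path used by the run:
  out/minikube-linux-arm64 -p functional-499633 image save kicbase/echo-server:functional-499633 /tmp/echo-server-save.tar
  out/minikube-linux-arm64 -p functional-499633 image rm kicbase/echo-server:functional-499633
  out/minikube-linux-arm64 -p functional-499633 image load /tmp/echo-server-save.tar
  out/minikube-linux-arm64 -p functional-499633 image ls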

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-499633
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-499633 image save --daemon kicbase/echo-server:functional-499633 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-499633
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-499633
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-499633
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-499633
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (178.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-330867 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0831 22:56:01.242193  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:56:28.945421  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-330867 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m57.510321374s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (178.31s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (9.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-330867 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-330867 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-330867 -- rollout status deployment/busybox: (5.965359423s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-330867 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-330867 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-330867 -- exec busybox-7dff88458-gfbwd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-330867 -- exec busybox-7dff88458-j8jjz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-330867 -- exec busybox-7dff88458-kj4qn -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-330867 -- exec busybox-7dff88458-gfbwd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-330867 -- exec busybox-7dff88458-j8jjz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-330867 -- exec busybox-7dff88458-kj4qn -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-330867 -- exec busybox-7dff88458-gfbwd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-330867 -- exec busybox-7dff88458-j8jjz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-330867 -- exec busybox-7dff88458-kj4qn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.31s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-330867 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-330867 -- exec busybox-7dff88458-gfbwd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-330867 -- exec busybox-7dff88458-gfbwd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-330867 -- exec busybox-7dff88458-j8jjz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-330867 -- exec busybox-7dff88458-j8jjz -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-330867 -- exec busybox-7dff88458-kj4qn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-330867 -- exec busybox-7dff88458-kj4qn -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.61s)
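The check above boils down to resolving host.minikube.internal inside each busybox pod and pinging the returned host address; by hand it looks like this (pod name and 192.168.49.1 are taken from this particular run and will differ on another cluster):
  kubectl --context ha-330867 exec busybox-7dff88458-gfbwd -- nslookup host.minikube.internal
  kubectl --context ha-330867 exec busybox-7dff88458-gfbwd -- ping -c 1 192.168.49.1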

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (35.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-330867 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-330867 -v=7 --alsologtostderr: (34.997249746s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (35.98s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-330867 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)
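The jsonpath expression above dumps the raw label maps for all nodes; a simpler way to eyeball the same information interactively (standard kubectl flag, not used by the test itself):
  kubectl --context ha-330867 get nodes --show-labels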

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.75s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (19.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-330867 status --output json -v=7 --alsologtostderr: (1.056758677s)
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 cp testdata/cp-test.txt ha-330867:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 cp ha-330867:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4235394847/001/cp-test_ha-330867.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 cp ha-330867:/home/docker/cp-test.txt ha-330867-m02:/home/docker/cp-test_ha-330867_ha-330867-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867-m02 "sudo cat /home/docker/cp-test_ha-330867_ha-330867-m02.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 cp ha-330867:/home/docker/cp-test.txt ha-330867-m03:/home/docker/cp-test_ha-330867_ha-330867-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867-m03 "sudo cat /home/docker/cp-test_ha-330867_ha-330867-m03.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 cp ha-330867:/home/docker/cp-test.txt ha-330867-m04:/home/docker/cp-test_ha-330867_ha-330867-m04.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867-m04 "sudo cat /home/docker/cp-test_ha-330867_ha-330867-m04.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 cp testdata/cp-test.txt ha-330867-m02:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 cp ha-330867-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4235394847/001/cp-test_ha-330867-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 cp ha-330867-m02:/home/docker/cp-test.txt ha-330867:/home/docker/cp-test_ha-330867-m02_ha-330867.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867 "sudo cat /home/docker/cp-test_ha-330867-m02_ha-330867.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 cp ha-330867-m02:/home/docker/cp-test.txt ha-330867-m03:/home/docker/cp-test_ha-330867-m02_ha-330867-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867-m03 "sudo cat /home/docker/cp-test_ha-330867-m02_ha-330867-m03.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 cp ha-330867-m02:/home/docker/cp-test.txt ha-330867-m04:/home/docker/cp-test_ha-330867-m02_ha-330867-m04.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867-m04 "sudo cat /home/docker/cp-test_ha-330867-m02_ha-330867-m04.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 cp testdata/cp-test.txt ha-330867-m03:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 cp ha-330867-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4235394847/001/cp-test_ha-330867-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 cp ha-330867-m03:/home/docker/cp-test.txt ha-330867:/home/docker/cp-test_ha-330867-m03_ha-330867.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867 "sudo cat /home/docker/cp-test_ha-330867-m03_ha-330867.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 cp ha-330867-m03:/home/docker/cp-test.txt ha-330867-m02:/home/docker/cp-test_ha-330867-m03_ha-330867-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867-m02 "sudo cat /home/docker/cp-test_ha-330867-m03_ha-330867-m02.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 cp ha-330867-m03:/home/docker/cp-test.txt ha-330867-m04:/home/docker/cp-test_ha-330867-m03_ha-330867-m04.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867-m04 "sudo cat /home/docker/cp-test_ha-330867-m03_ha-330867-m04.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 cp testdata/cp-test.txt ha-330867-m04:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867-m04 "sudo cat /home/docker/cp-test.txt"
E0831 22:58:55.232853  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/functional-499633/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:58:55.239560  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/functional-499633/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:58:55.252507  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/functional-499633/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:58:55.274254  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/functional-499633/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:58:55.316062  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/functional-499633/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:58:55.397477  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/functional-499633/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 cp ha-330867-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4235394847/001/cp-test_ha-330867-m04.txt
E0831 22:58:55.559537  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/functional-499633/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867-m04 "sudo cat /home/docker/cp-test.txt"
E0831 22:58:55.881739  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/functional-499633/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 cp ha-330867-m04:/home/docker/cp-test.txt ha-330867:/home/docker/cp-test_ha-330867-m04_ha-330867.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867-m04 "sudo cat /home/docker/cp-test.txt"
E0831 22:58:56.523466  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/functional-499633/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867 "sudo cat /home/docker/cp-test_ha-330867-m04_ha-330867.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 cp ha-330867-m04:/home/docker/cp-test.txt ha-330867-m02:/home/docker/cp-test_ha-330867-m04_ha-330867-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867-m04 "sudo cat /home/docker/cp-test.txt"
E0831 22:58:57.805661  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/functional-499633/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867-m02 "sudo cat /home/docker/cp-test_ha-330867-m04_ha-330867-m02.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 cp ha-330867-m04:/home/docker/cp-test.txt ha-330867-m03:/home/docker/cp-test_ha-330867-m04_ha-330867-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867-m03 "sudo cat /home/docker/cp-test_ha-330867-m04_ha-330867-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.41s)
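Every step in the matrix above follows the same pattern: copy a file with `minikube cp`, then verify it on the destination node over ssh. One cycle from the run, condensed:
  out/minikube-linux-arm64 -p ha-330867 cp testdata/cp-test.txt ha-330867-m02:/home/docker/cp-test.txt
  out/minikube-linux-arm64 -p ha-330867 ssh -n ha-330867-m02 "sudo cat /home/docker/cp-test.txt"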

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 node stop m02 -v=7 --alsologtostderr
E0831 22:59:00.367894  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/functional-499633/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:59:05.489996  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/functional-499633/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-330867 node stop m02 -v=7 --alsologtostderr: (12.010281598s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-330867 status -v=7 --alsologtostderr: exit status 7 (731.150021ms)

                                                
                                                
-- stdout --
	ha-330867
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-330867-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-330867-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-330867-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 22:59:11.372970  329328 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:59:11.373387  329328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:59:11.373422  329328 out.go:358] Setting ErrFile to fd 2...
	I0831 22:59:11.373444  329328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:59:11.373698  329328 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-277799/.minikube/bin
	I0831 22:59:11.373923  329328 out.go:352] Setting JSON to false
	I0831 22:59:11.373981  329328 mustload.go:65] Loading cluster: ha-330867
	I0831 22:59:11.374086  329328 notify.go:220] Checking for updates...
	I0831 22:59:11.374451  329328 config.go:182] Loaded profile config "ha-330867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 22:59:11.374493  329328 status.go:255] checking status of ha-330867 ...
	I0831 22:59:11.375332  329328 cli_runner.go:164] Run: docker container inspect ha-330867 --format={{.State.Status}}
	I0831 22:59:11.395343  329328 status.go:330] ha-330867 host status = "Running" (err=<nil>)
	I0831 22:59:11.395366  329328 host.go:66] Checking if "ha-330867" exists ...
	I0831 22:59:11.395697  329328 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-330867")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-330867
	I0831 22:59:11.425799  329328 host.go:66] Checking if "ha-330867" exists ...
	I0831 22:59:11.426208  329328 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:59:11.426262  329328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867
	I0831 22:59:11.446873  329328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867/id_rsa Username:docker}
	I0831 22:59:11.546982  329328 ssh_runner.go:195] Run: systemctl --version
	I0831 22:59:11.551846  329328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:59:11.564169  329328 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:59:11.622259  329328 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-08-31 22:59:11.612905489 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:59:11.622836  329328 kubeconfig.go:125] found "ha-330867" server: "https://192.168.49.254:8443"
	I0831 22:59:11.622869  329328 api_server.go:166] Checking apiserver status ...
	I0831 22:59:11.622916  329328 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:59:11.634879  329328 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1354/cgroup
	I0831 22:59:11.645221  329328 api_server.go:182] apiserver freezer: "7:freezer:/docker/db44dca62049d0d9134d666ca8fbf76da21abb37d9c21a41bddc9bfe1aa4f192/crio/crio-3e3e80b2528e844945c3e6bbc8b2d5ae8d529e856af1b9e417435faf73d43351"
	I0831 22:59:11.645299  329328 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/db44dca62049d0d9134d666ca8fbf76da21abb37d9c21a41bddc9bfe1aa4f192/crio/crio-3e3e80b2528e844945c3e6bbc8b2d5ae8d529e856af1b9e417435faf73d43351/freezer.state
	I0831 22:59:11.654210  329328 api_server.go:204] freezer state: "THAWED"
	I0831 22:59:11.654237  329328 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0831 22:59:11.662270  329328 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0831 22:59:11.662300  329328 status.go:422] ha-330867 apiserver status = Running (err=<nil>)
	I0831 22:59:11.662312  329328 status.go:257] ha-330867 status: &{Name:ha-330867 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:59:11.662329  329328 status.go:255] checking status of ha-330867-m02 ...
	I0831 22:59:11.662645  329328 cli_runner.go:164] Run: docker container inspect ha-330867-m02 --format={{.State.Status}}
	I0831 22:59:11.678570  329328 status.go:330] ha-330867-m02 host status = "Stopped" (err=<nil>)
	I0831 22:59:11.678591  329328 status.go:343] host is not running, skipping remaining checks
	I0831 22:59:11.678598  329328 status.go:257] ha-330867-m02 status: &{Name:ha-330867-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:59:11.678619  329328 status.go:255] checking status of ha-330867-m03 ...
	I0831 22:59:11.678927  329328 cli_runner.go:164] Run: docker container inspect ha-330867-m03 --format={{.State.Status}}
	I0831 22:59:11.697132  329328 status.go:330] ha-330867-m03 host status = "Running" (err=<nil>)
	I0831 22:59:11.697160  329328 host.go:66] Checking if "ha-330867-m03" exists ...
	I0831 22:59:11.697491  329328 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-330867")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-330867-m03
	I0831 22:59:11.713848  329328 host.go:66] Checking if "ha-330867-m03" exists ...
	I0831 22:59:11.714162  329328 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:59:11.714205  329328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m03
	I0831 22:59:11.732767  329328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867-m03/id_rsa Username:docker}
	I0831 22:59:11.829971  329328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:59:11.842087  329328 kubeconfig.go:125] found "ha-330867" server: "https://192.168.49.254:8443"
	I0831 22:59:11.842115  329328 api_server.go:166] Checking apiserver status ...
	I0831 22:59:11.842157  329328 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:59:11.853097  329328 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1298/cgroup
	I0831 22:59:11.862684  329328 api_server.go:182] apiserver freezer: "7:freezer:/docker/14476693c61cf492a4566b01df603b97babca3261f03ab5cdf74c54a7fed399d/crio/crio-3dcea034d305a92058f86344c9a5e076fd302d812dba17e940011a5510f31baa"
	I0831 22:59:11.862753  329328 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/14476693c61cf492a4566b01df603b97babca3261f03ab5cdf74c54a7fed399d/crio/crio-3dcea034d305a92058f86344c9a5e076fd302d812dba17e940011a5510f31baa/freezer.state
	I0831 22:59:11.871819  329328 api_server.go:204] freezer state: "THAWED"
	I0831 22:59:11.871847  329328 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0831 22:59:11.881058  329328 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0831 22:59:11.881090  329328 status.go:422] ha-330867-m03 apiserver status = Running (err=<nil>)
	I0831 22:59:11.881101  329328 status.go:257] ha-330867-m03 status: &{Name:ha-330867-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:59:11.881120  329328 status.go:255] checking status of ha-330867-m04 ...
	I0831 22:59:11.881443  329328 cli_runner.go:164] Run: docker container inspect ha-330867-m04 --format={{.State.Status}}
	I0831 22:59:11.897718  329328 status.go:330] ha-330867-m04 host status = "Running" (err=<nil>)
	I0831 22:59:11.897749  329328 host.go:66] Checking if "ha-330867-m04" exists ...
	I0831 22:59:11.898059  329328 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-330867")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-330867-m04
	I0831 22:59:11.915887  329328 host.go:66] Checking if "ha-330867-m04" exists ...
	I0831 22:59:11.916191  329328 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:59:11.916237  329328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-330867-m04
	I0831 22:59:11.931935  329328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/ha-330867-m04/id_rsa Username:docker}
	I0831 22:59:12.029555  329328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:59:12.048818  329328 status.go:257] ha-330867-m04 status: &{Name:ha-330867-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.74s)
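The stderr block above shows the sequence the status check walks through for each control-plane node: find the kube-apiserver process with pgrep, confirm its freezer cgroup is "THAWED", then probe /healthz on the load-balancer endpoint and expect a 200 "ok". A minimal Go sketch of just that final probe, using the endpoint from this run (https://192.168.49.254:8443) and skipping certificate verification because the apiserver cert is not trusted by the host here; the function name checkHealthz is illustrative, not minikube's own code.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz probes the apiserver /healthz endpoint the way the log above
// does, treating a 200 response with body "ok" as healthy.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver certificate is not trusted by the host in this
		// setup, so verification is skipped for the probe.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s/healthz returned %d: %s\n", url, resp.StatusCode, body)
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.49.254:8443"); err != nil {
		fmt.Println("apiserver not healthy:", err)
	}
}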

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.60s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (23.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 node start m02 -v=7 --alsologtostderr
E0831 22:59:15.731361  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/functional-499633/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-330867 node start m02 -v=7 --alsologtostderr: (22.271012714s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 status -v=7 --alsologtostderr
E0831 22:59:36.214548  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/functional-499633/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-330867 status -v=7 --alsologtostderr: (1.423737497s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (23.89s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (6.640281158s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.64s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (205.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-330867 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-330867 -v=7 --alsologtostderr
E0831 23:00:17.176794  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/functional-499633/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-330867 -v=7 --alsologtostderr: (37.108044208s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-330867 --wait=true -v=7 --alsologtostderr
E0831 23:01:01.242227  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:01:39.098630  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/functional-499633/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-330867 --wait=true -v=7 --alsologtostderr: (2m48.493530638s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-330867
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (205.76s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.55s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (35.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 stop -v=7 --alsologtostderr
E0831 23:03:55.233315  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/functional-499633/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-330867 stop -v=7 --alsologtostderr: (35.694291661s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-330867 status -v=7 --alsologtostderr: exit status 7 (115.079885ms)

                                                
                                                
-- stdout --
	ha-330867
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-330867-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-330867-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 23:04:01.919109  344137 out.go:345] Setting OutFile to fd 1 ...
	I0831 23:04:01.919243  344137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:04:01.919253  344137 out.go:358] Setting ErrFile to fd 2...
	I0831 23:04:01.919259  344137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:04:01.919496  344137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-277799/.minikube/bin
	I0831 23:04:01.919678  344137 out.go:352] Setting JSON to false
	I0831 23:04:01.919720  344137 mustload.go:65] Loading cluster: ha-330867
	I0831 23:04:01.919787  344137 notify.go:220] Checking for updates...
	I0831 23:04:01.920132  344137 config.go:182] Loaded profile config "ha-330867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:04:01.920145  344137 status.go:255] checking status of ha-330867 ...
	I0831 23:04:01.920668  344137 cli_runner.go:164] Run: docker container inspect ha-330867 --format={{.State.Status}}
	I0831 23:04:01.939724  344137 status.go:330] ha-330867 host status = "Stopped" (err=<nil>)
	I0831 23:04:01.939747  344137 status.go:343] host is not running, skipping remaining checks
	I0831 23:04:01.939754  344137 status.go:257] ha-330867 status: &{Name:ha-330867 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 23:04:01.939788  344137 status.go:255] checking status of ha-330867-m02 ...
	I0831 23:04:01.940106  344137 cli_runner.go:164] Run: docker container inspect ha-330867-m02 --format={{.State.Status}}
	I0831 23:04:01.958022  344137 status.go:330] ha-330867-m02 host status = "Stopped" (err=<nil>)
	I0831 23:04:01.958046  344137 status.go:343] host is not running, skipping remaining checks
	I0831 23:04:01.958053  344137 status.go:257] ha-330867-m02 status: &{Name:ha-330867-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 23:04:01.958071  344137 status.go:255] checking status of ha-330867-m04 ...
	I0831 23:04:01.958373  344137 cli_runner.go:164] Run: docker container inspect ha-330867-m04 --format={{.State.Status}}
	I0831 23:04:01.982444  344137 status.go:330] ha-330867-m04 host status = "Stopped" (err=<nil>)
	I0831 23:04:01.982466  344137 status.go:343] host is not running, skipping remaining checks
	I0831 23:04:01.982473  344137 status.go:257] ha-330867-m04 status: &{Name:ha-330867-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.81s)
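The status call above exits with status 7 while every host is reported Stopped. Assuming minikube's bitmask convention for status exit codes (roughly: 1 = host not OK, 2 = kubelet/cluster not OK, 4 = apiserver/Kubernetes not OK, as described in minikube status --help), 7 would mean all three bits are set, which matches the stdout block. A worked decoding, purely illustrative:

package main

import "fmt"

// Decode a minikube `status` exit code under the assumed bitmask convention
// noted above. Exit status 7 from the run would then mean host, kubelet, and
// apiserver are all not OK, consistent with the "Stopped" fields shown.
func main() {
	code := 7
	fmt.Println("host/minikube not OK:     ", code&1 != 0)
	fmt.Println("kubelet/cluster not OK:   ", code&2 != 0)
	fmt.Println("apiserver/kubernetes not OK:", code&4 != 0)
}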

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (74.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-330867 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-330867 --control-plane -v=7 --alsologtostderr: (1m13.909327258s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-330867 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-330867 status -v=7 --alsologtostderr: (1.03478863s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (74.94s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.77s)

                                                
                                    
x
+
TestJSONOutput/start/Command (49.38s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-572125 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-572125 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (49.372367333s)
--- PASS: TestJSONOutput/start/Command (49.38s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.78s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-572125 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.78s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-572125 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.87s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-572125 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-572125 --output=json --user=testUser: (5.869219232s)
--- PASS: TestJSONOutput/stop/Command (5.87s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-679593 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-679593 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (69.996354ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8687537e-0dbe-4609-a528-7c1984ca826f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-679593] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"92b0b8ad-244a-4dba-ac66-a7ec49c385df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18943"}}
	{"specversion":"1.0","id":"66060564-211f-46d6-8440-ab9fa8889bbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fdb0c62b-3fb8-4bb7-989f-92f489bad355","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18943-277799/kubeconfig"}}
	{"specversion":"1.0","id":"75a2412d-22e4-43d7-a98a-5acf501ef1ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-277799/.minikube"}}
	{"specversion":"1.0","id":"e4fff6a0-addb-400b-96ee-c186692be2ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"00b57285-a4a3-47c2-9184-cff0829179be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"945f21ad-1f8d-4e34-9680-803793483bda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-679593" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-679593
--- PASS: TestErrorJSONOutput (0.22s)
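The stdout block above is a stream of CloudEvents-style JSON objects, one per line, as emitted by --output=json. A small Go sketch that decodes lines with the fields visible in this run (specversion, id, source, type, datacontenttype, data); the field set is inferred from the output above, not taken from minikube's source, and the program names are illustrative.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the fields visible in the --output=json lines above.
type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// Read one JSON event per line from stdin, e.g.:
	//   minikube start -p demo --output=json | go run parse_events.go
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip non-JSON lines
		}
		// Error events in the run above carry exitcode/message/name in data.
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			continue
		}
		fmt.Printf("[%s] %s\n", ev.Type, ev.Data["message"])
	}
}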

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (37.95s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-760081 --network=
E0831 23:08:55.233278  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/functional-499633/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-760081 --network=: (35.75150422s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-760081" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-760081
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-760081: (2.172052775s)
--- PASS: TestKicCustomNetwork/create_custom_network (37.95s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (34.87s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-397858 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-397858 --network=bridge: (32.882503094s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-397858" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-397858
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-397858: (1.961825568s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.87s)

                                                
                                    
x
+
TestKicExistingNetwork (33s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-224162 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-224162 --network=existing-network: (30.816243941s)
helpers_test.go:176: Cleaning up "existing-network-224162" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-224162
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-224162: (2.029294797s)
--- PASS: TestKicExistingNetwork (33.00s)

                                                
                                    
x
+
TestKicCustomSubnet (35.43s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-122913 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-122913 --subnet=192.168.60.0/24: (33.294555101s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-122913 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-122913" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-122913
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-122913: (2.108359224s)
--- PASS: TestKicCustomSubnet (35.43s)
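The subnet check above queries the profile's Docker network with a Go-template --format expression. A hedged sketch of the same verification driven from Go via os/exec, reusing the exact template from the log; the network name is the one created in this particular run and Docker is assumed to be on PATH.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subnetOf returns the first IPAM subnet of a Docker network, using the same
// --format template as the test log above.
func subnetOf(network string) (string, error) {
	out, err := exec.Command("docker", "network", "inspect", network,
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Network name taken from this run; substitute your own profile.
	got, err := subnetOf("custom-subnet-122913")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("subnet:", got, "matches requested 192.168.60.0/24:", got == "192.168.60.0/24")
}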

                                                
                                    
x
+
TestKicStaticIP (38.72s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-411815 --static-ip=192.168.200.200
E0831 23:11:01.242567  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-411815 --static-ip=192.168.200.200: (36.423711385s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-411815 ip
helpers_test.go:176: Cleaning up "static-ip-411815" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-411815
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-411815: (2.144727278s)
--- PASS: TestKicStaticIP (38.72s)

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (71.88s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-729595 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-729595 --driver=docker  --container-runtime=crio: (31.26544449s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-732327 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-732327 --driver=docker  --container-runtime=crio: (34.872330417s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-729595
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-732327
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-732327" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-732327
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-732327: (2.151100437s)
helpers_test.go:176: Cleaning up "first-729595" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-729595
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-729595: (2.322441651s)
--- PASS: TestMinikubeProfile (71.88s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (9.39s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-255888 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-255888 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.385725723s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.39s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-255888 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (7.3s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-269836 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-269836 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.298185522s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.30s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-269836 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-255888 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-255888 --alsologtostderr -v=5: (1.629529481s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-269836 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-269836
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-269836: (1.214313099s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.94s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-269836
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-269836: (6.939314354s)
--- PASS: TestMountStart/serial/RestartStopped (7.94s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-269836 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
x
+
TestContainerIPsMultiNetwork/serial/CreateExtnet (0.07s)

                                                
                                                
=== RUN   TestContainerIPsMultiNetwork/serial/CreateExtnet
multinetwork_test.go:99: (dbg) Run:  docker network create network-extnet-715334
multinetwork_test.go:104: external network network-extnet-715334 created
--- PASS: TestContainerIPsMultiNetwork/serial/CreateExtnet (0.07s)

                                                
                                    
x
+
TestContainerIPsMultiNetwork/serial/FreshStart (49.51s)

                                                
                                                
=== RUN   TestContainerIPsMultiNetwork/serial/FreshStart
multinetwork_test.go:148: (dbg) Run:  out/minikube-linux-arm64 start -p extnet-709164 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0831 23:13:55.233253  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/functional-499633/client.crt: no such file or directory" logger="UnhandledError"
multinetwork_test.go:148: (dbg) Done: out/minikube-linux-arm64 start -p extnet-709164 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (49.484284613s)
multinetwork_test.go:161: cluster extnet-709164 started with address 192.168.67.2/
--- PASS: TestContainerIPsMultiNetwork/serial/FreshStart (49.51s)

                                                
                                    
x
+
TestContainerIPsMultiNetwork/serial/ConnectExtnet (0.1s)

                                                
                                                
=== RUN   TestContainerIPsMultiNetwork/serial/ConnectExtnet
multinetwork_test.go:113: (dbg) Run:  docker network connect network-extnet-715334 extnet-709164
multinetwork_test.go:126: cluster extnet-709164 was attached to network network-extnet-715334 with address 172.18.0.2/
--- PASS: TestContainerIPsMultiNetwork/serial/ConnectExtnet (0.10s)

                                                
                                    
x
+
TestContainerIPsMultiNetwork/serial/Stop (6.09s)

                                                
                                                
=== RUN   TestContainerIPsMultiNetwork/serial/Stop
multinetwork_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p extnet-709164 --alsologtostderr -v=5
multinetwork_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p extnet-709164 --alsologtostderr -v=5: (6.08499337s)
--- PASS: TestContainerIPsMultiNetwork/serial/Stop (6.09s)

                                                
                                    
x
+
TestContainerIPsMultiNetwork/serial/VerifyStatus (0.07s)

                                                
                                                
=== RUN   TestContainerIPsMultiNetwork/serial/VerifyStatus
helpers_test.go:700: (dbg) Run:  out/minikube-linux-arm64 status -p extnet-709164 --output=json --layout=cluster
helpers_test.go:700: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p extnet-709164 --output=json --layout=cluster: exit status 7 (69.64492ms)

                                                
                                                
-- stdout --
	{"Name":"extnet-709164","StatusCode":405,"StatusName":"Stopped","Step":"Done","StepDetail":"* 1 node stopped.","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":405,"StatusName":"Stopped"}},"Nodes":[{"Name":"extnet-709164","StatusCode":405,"StatusName":"Stopped","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestContainerIPsMultiNetwork/serial/VerifyStatus (0.07s)
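The single JSON line in the stdout block above is the --output=json --layout=cluster form of the status, with per-node and per-component entries nested inside. A sketch of Go types matching only the fields visible in that output (minikube's real schema may have more), with a decode of a trimmed sample from the same line:

package main

import (
	"encoding/json"
	"fmt"
)

// Types below mirror only the fields visible in the --layout=cluster output above.
type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

type clusterStatus struct {
	Name          string               `json:"Name"`
	StatusCode    int                  `json:"StatusCode"`
	StatusName    string               `json:"StatusName"`
	Step          string               `json:"Step"`
	StepDetail    string               `json:"StepDetail"`
	BinaryVersion string               `json:"BinaryVersion"`
	Components    map[string]component `json:"Components"`
	Nodes         []node               `json:"Nodes"`
}

func main() {
	// Trimmed sample from the output above.
	raw := `{"Name":"extnet-709164","StatusCode":405,"StatusName":"Stopped","Nodes":[{"Name":"extnet-709164","StatusCode":405,"StatusName":"Stopped"}]}`
	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("%s: %s (%d), %d node(s)\n", st.Name, st.StatusName, st.StatusCode, len(st.Nodes))
}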

                                                
                                    
x
+
TestContainerIPsMultiNetwork/serial/Start (19.14s)

                                                
                                                
=== RUN   TestContainerIPsMultiNetwork/serial/Start
multinetwork_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p extnet-709164 --alsologtostderr -v=5
multinetwork_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p extnet-709164 --alsologtostderr -v=5: (19.104921488s)
--- PASS: TestContainerIPsMultiNetwork/serial/Start (19.14s)

                                                
                                    
x
+
TestContainerIPsMultiNetwork/serial/VerifyNetworks (0.02s)

                                                
                                                
=== RUN   TestContainerIPsMultiNetwork/serial/VerifyNetworks
multinetwork_test.go:225: (dbg) Run:  docker inspect extnet-709164
--- PASS: TestContainerIPsMultiNetwork/serial/VerifyNetworks (0.02s)

                                                
                                    
x
+
TestContainerIPsMultiNetwork/serial/Delete (2.53s)

                                                
                                                
=== RUN   TestContainerIPsMultiNetwork/serial/Delete
multinetwork_test.go:253: (dbg) Run:  out/minikube-linux-arm64 delete -p extnet-709164 --alsologtostderr -v=5
multinetwork_test.go:253: (dbg) Done: out/minikube-linux-arm64 delete -p extnet-709164 --alsologtostderr -v=5: (2.531363434s)
--- PASS: TestContainerIPsMultiNetwork/serial/Delete (2.53s)

                                                
                                    
x
+
TestContainerIPsMultiNetwork/serial/DeleteExtnet (0.1s)

                                                
                                                
=== RUN   TestContainerIPsMultiNetwork/serial/DeleteExtnet
multinetwork_test.go:136: (dbg) Run:  docker network rm network-extnet-715334
multinetwork_test.go:140: external network network-extnet-715334 deleted
--- PASS: TestContainerIPsMultiNetwork/serial/DeleteExtnet (0.10s)

                                                
                                    
x
+
TestContainerIPsMultiNetwork/serial/VerifyDeletedResources (0.11s)

                                                
                                                
=== RUN   TestContainerIPsMultiNetwork/serial/VerifyDeletedResources
multinetwork_test.go:263: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
multinetwork_test.go:289: (dbg) Run:  docker ps -a
multinetwork_test.go:294: (dbg) Run:  docker volume inspect extnet-709164
multinetwork_test.go:294: (dbg) Non-zero exit: docker volume inspect extnet-709164: exit status 1 (15.357ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get extnet-709164: no such volume

                                                
                                                
** /stderr **
multinetwork_test.go:299: (dbg) Run:  docker network ls
--- PASS: TestContainerIPsMultiNetwork/serial/VerifyDeletedResources (0.11s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (83s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-261658 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0831 23:15:18.302648  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/functional-499633/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-261658 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m22.106253201s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (83.00s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (6.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-261658 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-261658 -- rollout status deployment/busybox
E0831 23:16:01.241948  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-261658 -- rollout status deployment/busybox: (4.290941166s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-261658 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-261658 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-261658 -- exec busybox-7dff88458-c7c4q -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-261658 -- exec busybox-7dff88458-nn6b8 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-261658 -- exec busybox-7dff88458-c7c4q -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-261658 -- exec busybox-7dff88458-nn6b8 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-261658 -- exec busybox-7dff88458-c7c4q -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-261658 -- exec busybox-7dff88458-nn6b8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.17s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-261658 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-261658 -- exec busybox-7dff88458-c7c4q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-261658 -- exec busybox-7dff88458-c7c4q -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-261658 -- exec busybox-7dff88458-nn6b8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-261658 -- exec busybox-7dff88458-nn6b8 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.98s)
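The ping test above first resolves host.minikube.internal inside each busybox pod and strips the answer down to a bare IP with the nslookup | awk 'NR==5' | cut -d' ' -f3 pipeline, then pings that address from the pod. A sketch of the same lookup driven from Go, reusing the exact pipeline from the log; the context and pod names are the ones from this run and kubectl is assumed to be on PATH.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostIPFromPod resolves host.minikube.internal inside a pod and returns the
// bare IP, using the same nslookup/awk/cut pipeline as the test log above.
func hostIPFromPod(kubeContext, pod string) (string, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext, "exec", pod, "--",
		"sh", "-c", "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Names taken from this run; substitute your own context and pod.
	ip, err := hostIPFromPod("multinode-261658", "busybox-7dff88458-c7c4q")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("host.minikube.internal resolves to", ip)
	// The test then pings this address from the pod:
	//   kubectl --context multinode-261658 exec <pod> -- sh -c "ping -c 1 <ip>"
}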

                                                
                                    
x
+
TestMultiNode/serial/AddNode (29.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-261658 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-261658 -v 3 --alsologtostderr: (29.092431218s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (29.77s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-261658 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.32s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 status --output json --alsologtostderr
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 cp testdata/cp-test.txt multinode-261658:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 ssh -n multinode-261658 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 cp multinode-261658:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile970502749/001/cp-test_multinode-261658.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 ssh -n multinode-261658 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 cp multinode-261658:/home/docker/cp-test.txt multinode-261658-m02:/home/docker/cp-test_multinode-261658_multinode-261658-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 ssh -n multinode-261658 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 ssh -n multinode-261658-m02 "sudo cat /home/docker/cp-test_multinode-261658_multinode-261658-m02.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 cp multinode-261658:/home/docker/cp-test.txt multinode-261658-m03:/home/docker/cp-test_multinode-261658_multinode-261658-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 ssh -n multinode-261658 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 ssh -n multinode-261658-m03 "sudo cat /home/docker/cp-test_multinode-261658_multinode-261658-m03.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 cp testdata/cp-test.txt multinode-261658-m02:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 ssh -n multinode-261658-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 cp multinode-261658-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile970502749/001/cp-test_multinode-261658-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 ssh -n multinode-261658-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 cp multinode-261658-m02:/home/docker/cp-test.txt multinode-261658:/home/docker/cp-test_multinode-261658-m02_multinode-261658.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 ssh -n multinode-261658-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 ssh -n multinode-261658 "sudo cat /home/docker/cp-test_multinode-261658-m02_multinode-261658.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 cp multinode-261658-m02:/home/docker/cp-test.txt multinode-261658-m03:/home/docker/cp-test_multinode-261658-m02_multinode-261658-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 ssh -n multinode-261658-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 ssh -n multinode-261658-m03 "sudo cat /home/docker/cp-test_multinode-261658-m02_multinode-261658-m03.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 cp testdata/cp-test.txt multinode-261658-m03:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 ssh -n multinode-261658-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 cp multinode-261658-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile970502749/001/cp-test_multinode-261658-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 ssh -n multinode-261658-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 cp multinode-261658-m03:/home/docker/cp-test.txt multinode-261658:/home/docker/cp-test_multinode-261658-m03_multinode-261658.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 ssh -n multinode-261658-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 ssh -n multinode-261658 "sudo cat /home/docker/cp-test_multinode-261658-m03_multinode-261658.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 cp multinode-261658-m03:/home/docker/cp-test.txt multinode-261658-m02:/home/docker/cp-test_multinode-261658-m03_multinode-261658-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 ssh -n multinode-261658-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 ssh -n multinode-261658-m02 "sudo cat /home/docker/cp-test_multinode-261658-m03_multinode-261658-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.37s)
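The copy-and-verify pattern exercised above can be reproduced by hand against any running multi-node profile. The profile, node, and destination file names below are illustrative; the commands themselves mirror the ones logged by the test:

    # copy a file into the primary node, then read it back over SSH
    minikube -p multinode-261658 cp testdata/cp-test.txt multinode-261658:/home/docker/cp-test.txt
    minikube -p multinode-261658 ssh -n multinode-261658 "sudo cat /home/docker/cp-test.txt"
    # node-to-node copy: the destination node is named in the target path, then verified on that node
    minikube -p multinode-261658 cp multinode-261658:/home/docker/cp-test.txt multinode-261658-m02:/home/docker/cp-test_copy.txt
    minikube -p multinode-261658 ssh -n multinode-261658-m02 "sudo cat /home/docker/cp-test_copy.txt"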

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-261658 node stop m03: (1.217652813s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-261658 status: exit status 7 (512.973708ms)

                                                
                                                
-- stdout --
	multinode-261658
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-261658-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-261658-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-261658 status --alsologtostderr: exit status 7 (536.5186ms)

                                                
                                                
-- stdout --
	multinode-261658
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-261658-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-261658-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 23:16:50.112991  404146 out.go:345] Setting OutFile to fd 1 ...
	I0831 23:16:50.113361  404146 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:16:50.113399  404146 out.go:358] Setting ErrFile to fd 2...
	I0831 23:16:50.113420  404146 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:16:50.113938  404146 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-277799/.minikube/bin
	I0831 23:16:50.114280  404146 out.go:352] Setting JSON to false
	I0831 23:16:50.114321  404146 mustload.go:65] Loading cluster: multinode-261658
	I0831 23:16:50.115108  404146 config.go:182] Loaded profile config "multinode-261658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:16:50.115130  404146 status.go:255] checking status of multinode-261658 ...
	I0831 23:16:50.115931  404146 cli_runner.go:164] Run: docker container inspect multinode-261658 --format={{.State.Status}}
	I0831 23:16:50.118255  404146 notify.go:220] Checking for updates...
	I0831 23:16:50.136956  404146 status.go:330] multinode-261658 host status = "Running" (err=<nil>)
	I0831 23:16:50.136987  404146 host.go:66] Checking if "multinode-261658" exists ...
	I0831 23:16:50.137302  404146 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-261658")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-261658
	I0831 23:16:50.159988  404146 host.go:66] Checking if "multinode-261658" exists ...
	I0831 23:16:50.160477  404146 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 23:16:50.160561  404146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-261658
	I0831 23:16:50.185380  404146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33288 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/multinode-261658/id_rsa Username:docker}
	I0831 23:16:50.285700  404146 ssh_runner.go:195] Run: systemctl --version
	I0831 23:16:50.290120  404146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 23:16:50.301550  404146 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 23:16:50.360756  404146 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-08-31 23:16:50.350414818 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 23:16:50.361350  404146 kubeconfig.go:125] found "multinode-261658" server: "https://192.168.67.2:8443"
	I0831 23:16:50.361386  404146 api_server.go:166] Checking apiserver status ...
	I0831 23:16:50.361439  404146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 23:16:50.372486  404146 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1414/cgroup
	I0831 23:16:50.381769  404146 api_server.go:182] apiserver freezer: "7:freezer:/docker/cb28f7ee7ea508bdb4d04741ba6163b7d857f52bdb7619882cf7eaef969d69c2/crio/crio-2d14914892ec033998855310391ce663e2004ca6bcf5dedde88cd5b95d5e0b57"
	I0831 23:16:50.381844  404146 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/cb28f7ee7ea508bdb4d04741ba6163b7d857f52bdb7619882cf7eaef969d69c2/crio/crio-2d14914892ec033998855310391ce663e2004ca6bcf5dedde88cd5b95d5e0b57/freezer.state
	I0831 23:16:50.390847  404146 api_server.go:204] freezer state: "THAWED"
	I0831 23:16:50.390877  404146 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0831 23:16:50.399527  404146 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0831 23:16:50.399559  404146 status.go:422] multinode-261658 apiserver status = Running (err=<nil>)
	I0831 23:16:50.399572  404146 status.go:257] multinode-261658 status: &{Name:multinode-261658 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 23:16:50.399591  404146 status.go:255] checking status of multinode-261658-m02 ...
	I0831 23:16:50.399938  404146 cli_runner.go:164] Run: docker container inspect multinode-261658-m02 --format={{.State.Status}}
	I0831 23:16:50.418676  404146 status.go:330] multinode-261658-m02 host status = "Running" (err=<nil>)
	I0831 23:16:50.418721  404146 host.go:66] Checking if "multinode-261658-m02" exists ...
	I0831 23:16:50.419111  404146 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-261658")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-261658-m02
	I0831 23:16:50.438997  404146 host.go:66] Checking if "multinode-261658-m02" exists ...
	I0831 23:16:50.439304  404146 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 23:16:50.439347  404146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-261658-m02
	I0831 23:16:50.456244  404146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33293 SSHKeyPath:/home/jenkins/minikube-integration/18943-277799/.minikube/machines/multinode-261658-m02/id_rsa Username:docker}
	I0831 23:16:50.553565  404146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 23:16:50.565190  404146 status.go:257] multinode-261658-m02 status: &{Name:multinode-261658-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0831 23:16:50.565228  404146 status.go:255] checking status of multinode-261658-m03 ...
	I0831 23:16:50.565576  404146 cli_runner.go:164] Run: docker container inspect multinode-261658-m03 --format={{.State.Status}}
	I0831 23:16:50.581921  404146 status.go:330] multinode-261658-m03 host status = "Stopped" (err=<nil>)
	I0831 23:16:50.581946  404146 status.go:343] host is not running, skipping remaining checks
	I0831 23:16:50.581954  404146 status.go:257] multinode-261658-m03 status: &{Name:multinode-261658-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)
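As a rough sketch of what this test exercises (profile and node names illustrative): stop one worker, then ask for status. In the run above `minikube status` exits with code 7 while any node in the profile is stopped, so scripts should check the exit code rather than assume success:

    minikube -p multinode-261658 node stop m03
    minikube -p multinode-261658 status --alsologtostderr
    echo "status exit code: $?"   # non-zero (7 above) while a node is stopped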

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (10.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-261658 node start m03 -v=7 --alsologtostderr: (9.002610005s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 -p multinode-261658 status -v=7 --alsologtostderr: (1.256033057s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.39s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (115.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-261658
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-261658
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-261658: (25.099936312s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-261658 --wait=true -v=8 --alsologtostderr
E0831 23:18:55.233680  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/functional-499633/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-261658 --wait=true -v=8 --alsologtostderr: (1m30.722127156s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-261658
--- PASS: TestMultiNode/serial/RestartKeepsNodes (115.95s)
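The stop/restart round-trip can be approximated manually; the point of the test is that the node list printed before and after the restart is identical (profile name illustrative):

    minikube node list -p multinode-261658
    minikube stop -p multinode-261658
    minikube start -p multinode-261658 --wait=true
    minikube node list -p multinode-261658   # should match the first listing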

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-261658 node delete m03: (4.905340975s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.59s)
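Deleting a node and confirming the remaining nodes are healthy uses the same commands the test runs (profile name illustrative):

    minikube -p multinode-261658 node delete m03
    minikube -p multinode-261658 status --alsologtostderr
    kubectl get nodes   # only the remaining nodes, all Ready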

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-261658 stop: (23.697982637s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-261658 status: exit status 7 (98.511183ms)

                                                
                                                
-- stdout --
	multinode-261658
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-261658-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-261658 status --alsologtostderr: exit status 7 (88.597703ms)

                                                
                                                
-- stdout --
	multinode-261658
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-261658-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 23:19:26.362999  411956 out.go:345] Setting OutFile to fd 1 ...
	I0831 23:19:26.363156  411956 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:19:26.363184  411956 out.go:358] Setting ErrFile to fd 2...
	I0831 23:19:26.363191  411956 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:19:26.363466  411956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-277799/.minikube/bin
	I0831 23:19:26.363687  411956 out.go:352] Setting JSON to false
	I0831 23:19:26.363778  411956 mustload.go:65] Loading cluster: multinode-261658
	I0831 23:19:26.363855  411956 notify.go:220] Checking for updates...
	I0831 23:19:26.364242  411956 config.go:182] Loaded profile config "multinode-261658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:19:26.364264  411956 status.go:255] checking status of multinode-261658 ...
	I0831 23:19:26.365127  411956 cli_runner.go:164] Run: docker container inspect multinode-261658 --format={{.State.Status}}
	I0831 23:19:26.383143  411956 status.go:330] multinode-261658 host status = "Stopped" (err=<nil>)
	I0831 23:19:26.383170  411956 status.go:343] host is not running, skipping remaining checks
	I0831 23:19:26.383178  411956 status.go:257] multinode-261658 status: &{Name:multinode-261658 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 23:19:26.383211  411956 status.go:255] checking status of multinode-261658-m02 ...
	I0831 23:19:26.383584  411956 cli_runner.go:164] Run: docker container inspect multinode-261658-m02 --format={{.State.Status}}
	I0831 23:19:26.405284  411956 status.go:330] multinode-261658-m02 host status = "Stopped" (err=<nil>)
	I0831 23:19:26.405312  411956 status.go:343] host is not running, skipping remaining checks
	I0831 23:19:26.405324  411956 status.go:257] multinode-261658-m02 status: &{Name:multinode-261658-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.89s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (47.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-261658 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-261658 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (46.794283469s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-261658 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (47.54s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (37.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-261658
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-261658-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-261658-m02 --driver=docker  --container-runtime=crio: exit status 14 (96.259382ms)

                                                
                                                
-- stdout --
	* [multinode-261658-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-277799/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-277799/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-261658-m02' is duplicated with machine name 'multinode-261658-m02' in profile 'multinode-261658'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-261658-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-261658-m03 --driver=docker  --container-runtime=crio: (35.011278227s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-261658
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-261658: exit status 80 (326.9365ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-261658 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-261658-m03 already exists in multinode-261658-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-261658-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-261658-m03: (1.956857321s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.44s)
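A minimal way to replay the name-collision checks above, assuming a multi-node profile already exists whose second node is named <profile>-m02: starting a new profile that reuses an existing machine name is rejected with exit status 14, and `node add` refuses to add a node whose generated name collides with another profile (exit status 80). Profile names are illustrative:

    minikube node list -p multinode-261658
    minikube start -p multinode-261658-m02 --driver=docker --container-runtime=crio   # exit 14: duplicate profile/machine name
    minikube start -p multinode-261658-m03 --driver=docker --container-runtime=crio   # standalone profile shadowing the next node name
    minikube node add -p multinode-261658                                             # exit 80: node name already taken
    minikube delete -p multinode-261658-m03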

                                                
                                    
x
+
TestPreload (132.19s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-183888 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0831 23:21:01.242117  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-183888 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m38.001009993s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-183888 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-183888 image pull gcr.io/k8s-minikube/busybox: (3.197946802s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-183888
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-183888: (5.881683978s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-183888 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-183888 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (22.411077714s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-183888 image list
helpers_test.go:176: Cleaning up "test-preload-183888" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-183888
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-183888: (2.377396929s)
--- PASS: TestPreload (132.19s)
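The scenario can be replayed by hand: create a cluster with the preloaded-images tarball disabled, pull an extra image, stop, then restart with preload enabled and list images; the manually pulled busybox image appears to be the thing the test checks for after the restart. Commands mirror the logged run; the profile name is illustrative:

    minikube start -p test-preload --memory=2200 --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=crio
    minikube -p test-preload image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload
    minikube start -p test-preload --memory=2200 --wait=true --driver=docker --container-runtime=crio
    minikube -p test-preload image list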

                                                
                                    
x
+
TestScheduledStopUnix (105.01s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-508025 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-508025 --memory=2048 --driver=docker  --container-runtime=crio: (29.114841677s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-508025 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-508025 -n scheduled-stop-508025
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-508025 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-508025 --cancel-scheduled
E0831 23:23:55.233268  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/functional-499633/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-508025 -n scheduled-stop-508025
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-508025
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-508025 --schedule 15s
E0831 23:24:04.308547  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-508025
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-508025: exit status 7 (67.91025ms)

                                                
                                                
-- stdout --
	scheduled-stop-508025
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-508025 -n scheduled-stop-508025
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-508025 -n scheduled-stop-508025: exit status 7 (64.783464ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-508025" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-508025
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-508025: (4.325874409s)
--- PASS: TestScheduledStopUnix (105.01s)
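For reference, the scheduled-stop flow used here looks like the following when run manually (profile name illustrative): `--schedule` arms a timer, `--cancel-scheduled` disarms it, and once the timer fires `status` reports the host as Stopped with exit status 7:

    minikube stop -p scheduled-stop --schedule 5m
    minikube status --format='{{.TimeToStop}}' -p scheduled-stop
    minikube stop -p scheduled-stop --cancel-scheduled
    minikube stop -p scheduled-stop --schedule 15s
    sleep 20 && minikube status --format='{{.Host}}' -p scheduled-stop   # "Stopped", exit 7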

                                                
                                    
x
+
TestInsufficientStorage (10.72s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-072925 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-072925 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.239771789s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"954694c2-fe62-472b-8be6-0b20e9382d78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-072925] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e73ad3cf-c4ff-424e-8e2b-766c7289266c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18943"}}
	{"specversion":"1.0","id":"a22d0c30-236e-404b-9395-ab5fc4592bd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2c17603a-0a71-4954-b16e-053662991797","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18943-277799/kubeconfig"}}
	{"specversion":"1.0","id":"2bdbf44f-1427-4563-aaec-133364840744","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-277799/.minikube"}}
	{"specversion":"1.0","id":"c5607a90-cb5a-4814-9c3c-3c3b89c3e5dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"29627828-cc99-4c60-96f8-90e511bedc53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b6e183d7-68d2-48e4-8f2d-2e76dd700b92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"7162857d-98c8-4cdd-b2c4-d3425cc5b98c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"1168ca61-80a9-410f-b460-dcdf8a9357d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2ccd7481-ddb2-4a94-918e-f1adbcd9195c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"3a687e15-923d-4708-afa1-45a8b9e8407f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-072925\" primary control-plane node in \"insufficient-storage-072925\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e5077604-051e-4439-8edf-e8a353a10492","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1724862063-19530 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"a2e769cb-9fe7-4d37-a232-32343f5f7847","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"d03333a6-be66-42d3-a67a-d508465403bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:700: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-072925 --output=json --layout=cluster
helpers_test.go:700: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-072925 --output=json --layout=cluster: exit status 7 (310.774911ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-072925","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-072925","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0831 23:25:01.268195  429784 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-072925" does not appear in /home/jenkins/minikube-integration/18943-277799/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:700: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-072925 --output=json --layout=cluster
helpers_test.go:700: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-072925 --output=json --layout=cluster: exit status 7 (281.24458ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-072925","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-072925","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0831 23:25:01.553113  429848 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-072925" does not appear in /home/jenkins/minikube-integration/18943-277799/kubeconfig
	E0831 23:25:01.563754  429848 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/insufficient-storage-072925/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-072925" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-072925
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-072925: (1.889647719s)
--- PASS: TestInsufficientStorage (10.72s)
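Judging from the environment dump in the JSON events above, the low-disk condition is simulated through the MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE variables rather than by actually filling /var. A sketch of the same check, with an illustrative profile name; per the logged error text, `--force` skips the free-space check:

    export MINIKUBE_TEST_STORAGE_CAPACITY=100
    export MINIKUBE_TEST_AVAILABLE_STORAGE=19
    minikube start -p insufficient-storage --memory=2048 --output=json --wait=true --driver=docker --container-runtime=crio   # exit 26 (RSRC_DOCKER_STORAGE)
    minikube status -p insufficient-storage --output=json --layout=cluster                                                    # StatusName: InsufficientStorage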

                                                
                                    
x
+
TestRunningBinaryUpgrade (75.3s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4006869978 start -p running-upgrade-543291 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4006869978 start -p running-upgrade-543291 --memory=2200 --vm-driver=docker  --container-runtime=crio: (36.352246402s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-543291 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0831 23:31:01.241987  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-543291 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.293408185s)
helpers_test.go:176: Cleaning up "running-upgrade-543291" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-543291
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-543291: (2.860196045s)
--- PASS: TestRunningBinaryUpgrade (75.30s)

                                                
                                    
x
+
TestKubernetesUpgrade (389.8s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-455889 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-455889 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m17.359019389s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-455889
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-455889: (1.955116949s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-455889 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-455889 status --format={{.Host}}: exit status 7 (89.218028ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-455889 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-455889 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m37.563518319s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-455889 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-455889 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-455889 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (140.748215ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-455889] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-277799/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-277799/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-455889
	    minikube start -p kubernetes-upgrade-455889 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4558892 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-455889 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-455889 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-455889 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.23981756s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-455889" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-455889
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-455889: (2.303127469s)
--- PASS: TestKubernetesUpgrade (389.80s)
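A condensed version of the upgrade path exercised above (profile name illustrative): create the cluster on the old Kubernetes release, stop it, restart on the new release, and note that asking for an older release afterwards fails with exit status 106 and the delete/recreate advice shown in the log:

    minikube start -p k8s-upgrade --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio
    minikube stop -p k8s-upgrade
    minikube start -p k8s-upgrade --memory=2200 --kubernetes-version=v1.31.0 --driver=docker --container-runtime=crio
    minikube start -p k8s-upgrade --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio   # exit 106: downgrade unsupported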

                                                
                                    
x
+
TestMissingContainerUpgrade (169.96s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3492443199 start -p missing-upgrade-914645 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3492443199 start -p missing-upgrade-914645 --memory=2200 --driver=docker  --container-runtime=crio: (1m31.174297993s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-914645
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-914645: (10.388709225s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-914645
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-914645 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-914645 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m4.08572075s)
helpers_test.go:176: Cleaning up "missing-upgrade-914645" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-914645
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-914645: (3.547601194s)
--- PASS: TestMissingContainerUpgrade (169.96s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-976880 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-976880 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (90.87554ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-976880] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-277799/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-277799/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
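As the stderr above spells out, `--no-kubernetes` and `--kubernetes-version` are mutually exclusive; if a version is pinned in the global config it has to be cleared first. A minimal sketch (profile name illustrative):

    minikube start -p nok8s --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=crio   # exit 14 (MK_USAGE)
    minikube config unset kubernetes-version
    minikube start -p nok8s --no-kubernetes --driver=docker --container-runtime=crio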

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (38.67s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-976880 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-976880 --driver=docker  --container-runtime=crio: (38.14341764s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-976880 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.67s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (8.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-976880 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-976880 --no-kubernetes --driver=docker  --container-runtime=crio: (6.685858907s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-976880 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-976880 status -o json: exit status 2 (315.377828ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-976880","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-976880
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-976880: (1.955422819s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.96s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (9.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-976880 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-976880 --no-kubernetes --driver=docker  --container-runtime=crio: (9.118180264s)
--- PASS: TestNoKubernetes/serial/Start (9.12s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-976880 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-976880 "sudo systemctl is-active --quiet service kubelet": exit status 1 (449.363156ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.46s)
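The verification itself is just a kubelet liveness probe over SSH: the remote `systemctl is-active` exits non-zero when the unit is inactive (status 3 in the stderr above), and `minikube ssh` surfaces that as a non-zero exit, which is exactly what a --no-kubernetes profile should report (profile name illustrative):

    minikube ssh -p nok8s "sudo systemctl is-active --quiet service kubelet"
    echo $?   # non-zero: kubelet is not running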

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
E0831 23:26:01.241723  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.11s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-976880
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-976880: (1.319159232s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-976880 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-976880 --driver=docker  --container-runtime=crio: (7.375789023s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.38s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-976880 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-976880 "sudo systemctl is-active --quiet service kubelet": exit status 1 (402.751768ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.40s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.68s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.68s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (113.02s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2779865713 start -p stopped-upgrade-181117 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2779865713 start -p stopped-upgrade-181117 --memory=2200 --vm-driver=docker  --container-runtime=crio: (33.963631952s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2779865713 -p stopped-upgrade-181117 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2779865713 -p stopped-upgrade-181117 stop: (2.623848416s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-181117 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0831 23:28:55.233222  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/functional-499633/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-181117 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m16.431587487s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (113.02s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.41s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-181117
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-181117: (1.409606934s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.41s)

                                                
                                    
x
+
TestPause/serial/Start (53.65s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-587308 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0831 23:31:58.304082  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/functional-499633/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-587308 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (53.651661252s)
--- PASS: TestPause/serial/Start (53.65s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (124.88s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-587308 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-587308 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (2m4.85940562s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (124.88s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-034375 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-034375 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (184.521496ms)

                                                
                                                
-- stdout --
	* [false-034375] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-277799/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-277799/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 23:33:20.402447  469092 out.go:345] Setting OutFile to fd 1 ...
	I0831 23:33:20.402585  469092 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:33:20.402596  469092 out.go:358] Setting ErrFile to fd 2...
	I0831 23:33:20.402603  469092 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 23:33:20.403456  469092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-277799/.minikube/bin
	I0831 23:33:20.404006  469092 out.go:352] Setting JSON to false
	I0831 23:33:20.405024  469092 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":11749,"bootTime":1725135452,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0831 23:33:20.405127  469092 start.go:139] virtualization:  
	I0831 23:33:20.408684  469092 out.go:177] * [false-034375] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0831 23:33:20.412099  469092 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 23:33:20.412235  469092 notify.go:220] Checking for updates...
	I0831 23:33:20.417516  469092 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 23:33:20.420178  469092 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-277799/kubeconfig
	I0831 23:33:20.422879  469092 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-277799/.minikube
	I0831 23:33:20.425544  469092 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0831 23:33:20.428254  469092 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 23:33:20.431458  469092 config.go:182] Loaded profile config "pause-587308": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0831 23:33:20.431564  469092 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 23:33:20.457278  469092 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0831 23:33:20.457414  469092 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 23:33:20.522957  469092 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-31 23:33:20.512652474 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 23:33:20.523082  469092 docker.go:307] overlay module found
	I0831 23:33:20.525885  469092 out.go:177] * Using the docker driver based on user configuration
	I0831 23:33:20.528691  469092 start.go:297] selected driver: docker
	I0831 23:33:20.528716  469092 start.go:901] validating driver "docker" against <nil>
	I0831 23:33:20.528731  469092 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 23:33:20.531981  469092 out.go:201] 
	W0831 23:33:20.534746  469092 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0831 23:33:20.537593  469092 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-034375 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-034375

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-034375

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-034375

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-034375

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-034375

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-034375

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-034375

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-034375

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-034375

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-034375

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-034375

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-034375" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-034375" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-034375" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-034375" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-034375" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-034375" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-034375" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-034375" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-034375" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-034375" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-034375" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 31 Aug 2024 23:31:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-587308
contexts:
- context:
    cluster: pause-587308
    extensions:
    - extension:
        last-update: Sat, 31 Aug 2024 23:31:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-587308
  name: pause-587308
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-587308
  user:
    client-certificate: /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/pause-587308/client.crt
    client-key: /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/pause-587308/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-034375

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-034375"

                                                
                                                
----------------------- debugLogs end: false-034375 [took: 3.282357538s] --------------------------------
helpers_test.go:176: Cleaning up "false-034375" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-034375
--- PASS: TestNetworkPlugins/group/false (3.63s)
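The exit status 14 above is the expected result: with --container-runtime=crio, minikube rejects --cni=false up front ("Exiting due to MK_USAGE: The \"crio\" container runtime requires CNI") because CRI-O relies on a CNI plugin for pod networking, and the test only checks that this rejection happens before any cluster is created, which is why every debugLogs probe reports a missing false-034375 context or profile. A hedged sketch of the same invocation from the caller's side, assuming --cni=bridge as one value the start command accepts and using a hypothetical profile name:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// With CRI-O, some CNI must be configured; passing a concrete plugin such as
	// bridge (assumed here purely for illustration) avoids the MK_USAGE rejection
	// that --cni=false triggers in the test above.
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "cni-demo",
		"--driver=docker", "--container-runtime=crio", "--cni=bridge")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("start failed: %v\n%s\n", err, out)
		return
	}
	fmt.Println("started with an explicit CNI; crio's requirement is satisfied")
}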

                                                
                                    
x
+
TestPause/serial/Pause (0.93s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-587308 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.93s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.31s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
helpers_test.go:700: (dbg) Run:  out/minikube-linux-arm64 status -p pause-587308 --output=json --layout=cluster
helpers_test.go:700: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-587308 --output=json --layout=cluster: exit status 2 (309.229105ms)

                                                
                                                
-- stdout --
	{"Name":"pause-587308","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-587308","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)
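The --layout=cluster JSON above encodes state as HTTP-like status codes: 200 for OK, 405 for Stopped, 418 for Paused. A paused cluster therefore reports 418 at the top level and for the apiserver while the kubeconfig component stays at 200, which is why an exit status of 2 is tolerated here. A minimal Go sketch that decodes just the top-level fields used in this report; the struct is mine, not minikube's API:

package main

import (
	"encoding/json"
	"fmt"
)

// clusterStatus mirrors only the fields this report relies on; the real
// output carries more detail (per-node and per-component status).
type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

func main() {
	// Sample taken from the VerifyStatus output above, trimmed to the top level.
	raw := `{"Name":"pause-587308","StatusCode":418,"StatusName":"Paused"}`
	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	// 418 marks a paused cluster; 200 would mean OK, 405 stopped.
	fmt.Printf("%s: %d %s\n", st.Name, st.StatusCode, st.StatusName)
}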

                                                
                                    
x
+
TestPause/serial/Unpause (0.96s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-587308 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.96s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.22s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-587308 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-587308 --alsologtostderr -v=5: (1.223869193s)
--- PASS: TestPause/serial/PauseAgain (1.22s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (5.08s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-587308 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-587308 --alsologtostderr -v=5: (5.084463397s)
--- PASS: TestPause/serial/DeletePaused (5.08s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.56s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-587308
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-587308: exit status 1 (60.013982ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-587308: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.56s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (158.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-824643 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0831 23:36:01.241452  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-824643 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m38.169029839s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (158.17s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.61s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-824643 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:345: "busybox" [4c535cef-2373-48f2-9c09-f6fd7315f95a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:345: "busybox" [4c535cef-2373-48f2-9c09-f6fd7315f95a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.006444381s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-824643 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.61s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-824643 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-824643 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-824643 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-824643 --alsologtostderr -v=3: (12.055966947s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.06s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-824643 -n old-k8s-version-824643
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-824643 -n old-k8s-version-824643: exit status 7 (70.811291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-824643 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (153.89s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-824643 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-824643 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m33.415605228s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-824643 -n old-k8s-version-824643
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (153.89s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (70.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-188517 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0831 23:38:55.233219  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/functional-499633/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-188517 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (1m10.159995724s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (70.16s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-188517 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:345: "busybox" [8b15e4df-9314-4336-8ced-546bee4c7bec] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:345: "busybox" [8b15e4df-9314-4336-8ced-546bee4c7bec] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003664331s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-188517 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.40s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-188517 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-188517 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.031953868s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-188517 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-188517 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-188517 --alsologtostderr -v=3: (12.019358675s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.02s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-188517 -n no-preload-188517
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-188517 -n no-preload-188517: exit status 7 (75.730744ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-188517 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (277.66s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-188517 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-188517 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m37.298578353s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-188517 -n no-preload-188517
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (277.66s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:345: "kubernetes-dashboard-cd95d586-v6ssf" [9eb9fbcb-a587-4497-b9ce-ad5d8c5a2096] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003852489s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:345: "kubernetes-dashboard-cd95d586-v6ssf" [9eb9fbcb-a587-4497-b9ce-ad5d8c5a2096] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005011139s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-824643 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-824643 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (3.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-824643 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-824643 -n old-k8s-version-824643
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-824643 -n old-k8s-version-824643: exit status 2 (334.796547ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-824643 -n old-k8s-version-824643
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-824643 -n old-k8s-version-824643: exit status 2 (315.155014ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-824643 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-824643 -n old-k8s-version-824643
E0831 23:40:44.310077  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-824643 -n old-k8s-version-824643
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.22s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (61.74s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-782451 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0831 23:41:01.242213  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-782451 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (1m1.735167086s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (61.74s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.64s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-782451 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:345: "busybox" [c3fb017a-f683-4542-879a-d1d985688784] Pending
helpers_test.go:345: "busybox" [c3fb017a-f683-4542-879a-d1d985688784] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:345: "busybox" [c3fb017a-f683-4542-879a-d1d985688784] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.052995875s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-782451 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.64s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-782451 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-782451 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.040392403s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-782451 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-782451 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-782451 --alsologtostderr -v=3: (12.056486418s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.06s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-782451 -n embed-certs-782451
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-782451 -n embed-certs-782451: exit status 7 (65.586726ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-782451 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)
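
Editor's note: the "status error: exit status 7 (may be ok)" line above reflects that `minikube status` exits 7 when the profile's host is stopped, which this step tolerates before enabling the dashboard addon on the stopped profile. A hedged Go sketch of that exit-code handling:
-- example: tolerate exit status 7 from minikube status (illustrative Go sketch) --
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	profile := "embed-certs-782451"

	// `minikube status --format={{.Host}}` exits 7 for a stopped host; the
	// test treats that as acceptable ("may be ok") and continues.
	status := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := status.Output()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("host state:", string(out))
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		fmt.Println("host stopped (exit 7), continuing:", string(out))
	default:
		fmt.Println("unexpected status error:", err)
		return
	}

	enable := exec.Command("out/minikube-linux-arm64", "addons", "enable", "dashboard",
		"-p", profile,
		"--images=MetricsScraper=registry.k8s.io/echoserver:1.4")
	if err := enable.Run(); err != nil {
		fmt.Println("addons enable failed:", err)
	}
}
-- /example --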

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (267.87s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-782451 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0831 23:42:32.685236  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/old-k8s-version-824643/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:42:32.691719  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/old-k8s-version-824643/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:42:32.703093  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/old-k8s-version-824643/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:42:32.725218  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/old-k8s-version-824643/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:42:32.766621  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/old-k8s-version-824643/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:42:32.848035  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/old-k8s-version-824643/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:42:33.009412  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/old-k8s-version-824643/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:42:33.331094  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/old-k8s-version-824643/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:42:33.972629  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/old-k8s-version-824643/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:42:35.253985  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/old-k8s-version-824643/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:42:37.815481  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/old-k8s-version-824643/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:42:42.937185  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/old-k8s-version-824643/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:42:53.179476  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/old-k8s-version-824643/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:43:13.661089  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/old-k8s-version-824643/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:43:54.623010  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/old-k8s-version-824643/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:43:55.232759  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/functional-499633/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-782451 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m27.493051503s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-782451 -n embed-certs-782451
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (267.87s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:345: "kubernetes-dashboard-695b96c756-kcx8s" [aa54cfc2-9a98-4c65-9c6d-25239c094dd3] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0033427s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:345: "kubernetes-dashboard-695b96c756-kcx8s" [aa54cfc2-9a98-4c65-9c6d-25239c094dd3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003976562s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-188517 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-188517 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-188517 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-188517 -n no-preload-188517
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-188517 -n no-preload-188517: exit status 2 (348.210155ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-188517 -n no-preload-188517
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-188517 -n no-preload-188517: exit status 2 (333.566463ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-188517 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-188517 -n no-preload-188517
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-188517 -n no-preload-188517
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.23s)
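
Editor's note: while a profile is paused, `minikube status` still prints the component state ("Paused" / "Stopped" above) but exits 2, which the Pause step deliberately tolerates before unpausing. A minimal Go sketch of that pause, verify, unpause cycle, assuming the same profile name as the log:
-- example: pause / check exit code / unpause (illustrative Go sketch) --
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// componentStatus runs `minikube status --format=<tmpl>` and returns the
// printed state together with the process exit code.
func componentStatus(profile, tmpl string) (string, int) {
	out, err := exec.Command("out/minikube-linux-arm64", "status",
		"--format="+tmpl, "-p", profile, "-n", profile).Output()
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode()
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	profile := "no-preload-188517"

	if err := exec.Command("out/minikube-linux-arm64", "pause", "-p", profile).Run(); err != nil {
		fmt.Println("pause failed:", err)
		return
	}
	state, code := componentStatus(profile, "{{.APIServer}}")
	fmt.Printf("apiserver while paused: %q exit=%d (log above shows Paused, exit 2)\n", state, code)

	if err := exec.Command("out/minikube-linux-arm64", "unpause", "-p", profile).Run(); err != nil {
		fmt.Println("unpause failed:", err)
		return
	}
	state, code = componentStatus(profile, "{{.APIServer}}")
	fmt.Printf("apiserver after unpause: %q exit=%d\n", state, code)
}
-- /example --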

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (49.49s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-962859 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0831 23:45:16.544486  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/old-k8s-version-824643/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-962859 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (49.491282563s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (49.49s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-962859 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:345: "busybox" [74fbfda4-6432-4d1b-99a7-e2984657f7dc] Pending
helpers_test.go:345: "busybox" [74fbfda4-6432-4d1b-99a7-e2984657f7dc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:345: "busybox" [74fbfda4-6432-4d1b-99a7-e2984657f7dc] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003863982s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-962859 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.36s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.10s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-962859 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-962859 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.00s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-962859 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-962859 --alsologtostderr -v=3: (12.001454786s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-962859 -n default-k8s-diff-port-962859
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-962859 -n default-k8s-diff-port-962859: exit status 7 (87.42615ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-962859 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (289.83s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-962859 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0831 23:46:01.241507  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-962859 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m49.390620966s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-962859 -n default-k8s-diff-port-962859
E0831 23:50:48.935017  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/no-preload-188517/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (289.83s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:345: "kubernetes-dashboard-695b96c756-8ssvm" [b13c456c-3e94-4938-b64b-2d7a76d05324] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004255537s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:345: "kubernetes-dashboard-695b96c756-8ssvm" [b13c456c-3e94-4938-b64b-2d7a76d05324] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004234286s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-782451 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-782451 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)
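
Editor's note: the VerifyKubernetesImages step lists the images cached in the profile and reports any that are not part of minikube's expected Kubernetes set. The sketch below approximates that idea; it assumes the default one-image-per-line output of `minikube image list`, and the "expected" prefixes are placeholders for illustration, not minikube's real allow-list.
-- example: flag non-minikube images (illustrative Go sketch) --
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List the images in the profile.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "embed-certs-782451",
		"image", "list").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}

	// Prefixes assumed to be "minikube's own" images for this illustration;
	// the real test compares against its internal expected-image list.
	expected := []string{"registry.k8s.io/", "gcr.io/k8s-minikube/storage-provisioner"}

	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		img := strings.TrimSpace(sc.Text())
		if img == "" {
			continue
		}
		known := false
		for _, p := range expected {
			if strings.HasPrefix(img, p) {
				known = true
				break
			}
		}
		if !known {
			fmt.Println("Found non-minikube image:", img)
		}
	}
}
-- /example --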

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-782451 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-782451 -n embed-certs-782451
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-782451 -n embed-certs-782451: exit status 2 (350.033034ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-782451 -n embed-certs-782451
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-782451 -n embed-certs-782451: exit status 2 (314.04969ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-782451 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-782451 -n embed-certs-782451
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-782451 -n embed-certs-782451
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.16s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (35.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-116054 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0831 23:47:32.684596  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/old-k8s-version-824643/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-116054 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (35.184488192s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-116054 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-116054 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.263683332s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.30s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-116054 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-116054 --alsologtostderr -v=3: (1.30105294s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.30s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-116054 -n newest-cni-116054
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-116054 -n newest-cni-116054: exit status 7 (68.561362ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-116054 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (18.51s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-116054 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-116054 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (18.16079709s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-116054 -n newest-cni-116054
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.51s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-116054 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.30s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-116054 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-116054 -n newest-cni-116054
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-116054 -n newest-cni-116054: exit status 2 (376.838343ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-116054 -n newest-cni-116054
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-116054 -n newest-cni-116054: exit status 2 (352.396545ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-116054 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-116054 -n newest-cni-116054
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-116054 -n newest-cni-116054
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (54.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-034375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0831 23:48:38.306116  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/functional-499633/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:48:55.233347  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/functional-499633/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-034375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (54.69220282s)
--- PASS: TestNetworkPlugins/group/auto/Start (54.69s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-034375 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-034375 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-ld48p" [15d04cf4-4dc8-4a87-b9e0-da0a0a5eb48a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:345: "netcat-6fc964789b-ld48p" [15d04cf4-4dc8-4a87-b9e0-da0a0a5eb48a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003218374s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-034375 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-034375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-034375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
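
Editor's note: the DNS, Localhost, and HairPin steps above are three connectivity probes run inside the netcat deployment created from testdata/netcat-deployment.yaml. A compact Go sketch of the same three `kubectl exec` probes (the deployment name and context come from the log; everything else is illustrative):
-- example: DNS / localhost / hairpin probes (illustrative Go sketch) --
package main

import (
	"fmt"
	"os/exec"
)

// probe execs a shell command inside the netcat deployment's pod, the same way
// the DNS/Localhost/HairPin steps above do.
func probe(context, shellCmd string) {
	out, err := exec.Command("kubectl", "--context", context,
		"exec", "deployment/netcat", "--", "/bin/sh", "-c", shellCmd).CombinedOutput()
	fmt.Printf("$ %s\nerr=%v\n%s\n", shellCmd, err, out)
}

func main() {
	ctx := "auto-034375"
	probe(ctx, "nslookup kubernetes.default")    // DNS: cluster service name resolves
	probe(ctx, "nc -w 5 -i 5 -z localhost 8080") // Localhost: pod reaches itself
	probe(ctx, "nc -w 5 -i 5 -z netcat 8080")    // HairPin: pod reaches its own service
}
-- /example --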

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (52.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-034375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0831 23:49:29.560662  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/no-preload-188517/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:49:32.122021  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/no-preload-188517/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:49:37.243761  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/no-preload-188517/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:49:47.485962  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/no-preload-188517/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:50:07.967286  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/no-preload-188517/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-034375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (52.990248858s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (52.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:345: "kindnet-wkkv9" [8fad0d89-7c8d-46b9-b833-d310387b200a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004085087s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-034375 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-034375 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-5h6pd" [6524946a-1379-4e1f-8191-8f634b7856e1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:345: "netcat-6fc964789b-5h6pd" [6524946a-1379-4e1f-8191-8f634b7856e1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003941673s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-034375 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-034375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-034375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:345: "kubernetes-dashboard-695b96c756-fs9px" [aa673b8d-1fe6-42c7-916d-f6ab1098bbd2] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004361091s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:345: "kubernetes-dashboard-695b96c756-fs9px" [aa673b8d-1fe6-42c7-916d-f6ab1098bbd2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005070183s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-962859 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.13s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-962859 image list --format=json
E0831 23:51:01.242441  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-962859 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-962859 --alsologtostderr -v=1: (1.099275846s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-962859 -n default-k8s-diff-port-962859
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-962859 -n default-k8s-diff-port-962859: exit status 2 (414.113326ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-962859 -n default-k8s-diff-port-962859
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-962859 -n default-k8s-diff-port-962859: exit status 2 (425.760288ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-962859 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-962859 -n default-k8s-diff-port-962859
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-962859 -n default-k8s-diff-port-962859
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.06s)
E0831 23:54:54.701437  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/no-preload-188517/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:55:19.106502  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/auto-034375/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:55:22.143246  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/kindnet-034375/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:55:22.149686  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/kindnet-034375/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:55:22.160999  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/kindnet-034375/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:55:22.182462  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/kindnet-034375/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:55:22.223839  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/kindnet-034375/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:55:22.305192  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/kindnet-034375/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:55:22.466741  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/kindnet-034375/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:55:22.788307  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/kindnet-034375/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:55:23.429838  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/kindnet-034375/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:55:24.711791  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/kindnet-034375/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:55:27.274127  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/kindnet-034375/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:55:32.395874  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/kindnet-034375/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:55:35.698689  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/default-k8s-diff-port-962859/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:55:35.705199  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/default-k8s-diff-port-962859/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:55:35.716712  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/default-k8s-diff-port-962859/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:55:35.738259  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/default-k8s-diff-port-962859/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:55:35.779697  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/default-k8s-diff-port-962859/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:55:35.861257  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/default-k8s-diff-port-962859/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:55:36.023118  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/default-k8s-diff-port-962859/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:55:36.344885  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/default-k8s-diff-port-962859/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:55:36.986953  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/default-k8s-diff-port-962859/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:55:38.268718  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/default-k8s-diff-port-962859/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:55:40.830710  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/default-k8s-diff-port-962859/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:55:42.638040  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/kindnet-034375/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:55:45.952933  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/default-k8s-diff-port-962859/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:55:56.194915  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/default-k8s-diff-port-962859/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (75.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-034375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-034375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m15.430853689s)
--- PASS: TestNetworkPlugins/group/calico/Start (75.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (65.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-034375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0831 23:52:10.859205  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/no-preload-188517/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-034375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m5.268326816s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (65.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-034375 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-034375 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-cb7wm" [2c05c01b-dc77-4ffd-90e4-05b8895b4327] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:345: "netcat-6fc964789b-cb7wm" [2c05c01b-dc77-4ffd-90e4-05b8895b4327] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.006634416s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.30s)
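The NetCatPod step deploys a small netcat/dnsutils workload and waits for it to become Ready. A sketch of the same check with plain kubectl, assuming the repository's testdata/netcat-deployment.yaml manifest and the default namespace used above:

# Re-create the netcat deployment used by the network-plugin tests
kubectl --context custom-flannel-034375 replace --force -f testdata/netcat-deployment.yaml

# Wait for the app=netcat pod to pass its readiness checks (the test allows up to 15m)
kubectl --context custom-flannel-034375 wait --for=condition=Ready pod -l app=netcat --timeout=15m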

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:345: "calico-node-5xvlf" [52f15797-915c-42d3-a5b2-6c5402043a8e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004257399s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-034375 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-034375 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-28jrl" [3cdf18b2-d048-41cb-be4d-39962d6a2046] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:345: "netcat-6fc964789b-28jrl" [3cdf18b2-d048-41cb-be4d-39962d6a2046] Running
E0831 23:52:32.684857  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/old-k8s-version-824643/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.006049162s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-034375 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-034375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-034375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)
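DNS, Localhost and HairPin are three one-shot probes run inside the netcat pod. A sketch of the same probes by hand, using the commands from the Run lines above:

# Cluster DNS: resolve the kubernetes.default service from inside the pod
kubectl --context custom-flannel-034375 exec deployment/netcat -- nslookup kubernetes.default

# Localhost: the pod can reach its own listening port via 127.0.0.1
kubectl --context custom-flannel-034375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"

# HairPin: the pod can reach itself back through the netcat service name
kubectl --context custom-flannel-034375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"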

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-034375 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-034375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-034375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (80.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-034375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-034375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m20.437702867s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (80.44s)
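With --enable-default-cni=true minikube writes a basic bridge-style CNI configuration onto the node instead of deploying a CNI DaemonSet (this description, and the /etc/cni/net.d path, are assumptions based on the standard CNI layout, not something this test asserts). A sketch for checking what actually landed on the node:

# Inspect the CNI configuration written onto the node
minikube ssh -p enable-default-cni-034375 "ls /etc/cni/net.d && sudo cat /etc/cni/net.d/*"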

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (63.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-034375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0831 23:53:55.233318  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/functional-499633/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:53:57.166975  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/auto-034375/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:53:57.173379  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/auto-034375/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:53:57.184805  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/auto-034375/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:53:57.206338  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/auto-034375/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:53:57.247922  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/auto-034375/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:53:57.329355  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/auto-034375/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:53:57.491022  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/auto-034375/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:53:57.812618  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/auto-034375/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:53:58.454123  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/auto-034375/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:53:59.736379  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/auto-034375/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:54:02.298604  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/auto-034375/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-034375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m3.809248488s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.81s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:345: "kube-flannel-ds-mgpw9" [d5243f1e-c663-48d2-9312-2bf9f4f6268e] Running
E0831 23:54:07.420422  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/auto-034375/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004285532s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
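The ControllerPod step waits for the flannel DaemonSet pod to be healthy. A sketch of the same check; the DaemonSet name kube-flannel-ds is inferred from the pod name kube-flannel-ds-mgpw9 above and is an assumption:

# List the flannel pods by the label the test waits on
kubectl --context flannel-034375 -n kube-flannel get pods -l app=flannel

# Or wait on the whole DaemonSet rollout (name inferred from the pod name above)
kubectl --context flannel-034375 -n kube-flannel rollout status daemonset/kube-flannel-ds --timeout=10m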

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-034375 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-034375 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (13.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-034375 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-hlkqf" [3b006164-6bc9-45e6-9a15-7a1ded90b112] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:345: "netcat-6fc964789b-hlkqf" [3b006164-6bc9-45e6-9a15-7a1ded90b112] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.005430619s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-034375 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-4hc6b" [e26fe5fc-3e2e-4ec9-af60-3f3d760e828a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0831 23:54:17.662569  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/auto-034375/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:345: "netcat-6fc964789b-4hc6b" [e26fe5fc-3e2e-4ec9-af60-3f3d760e828a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004201091s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-034375 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-034375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-034375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-034375 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-034375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-034375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (69.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-034375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-034375 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m9.117340813s)
--- PASS: TestNetworkPlugins/group/bridge/Start (69.12s)
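Unlike the calico and flannel groups, the bridge group has no ControllerPod step, since --cni=bridge is a file-based CNI with no DaemonSet to wait on. A sketch for confirming that only the core components are running, assuming the cluster is still up:

# With --cni=bridge there is no CNI DaemonSet; kube-system should contain only the core pods
kubectl --context bridge-034375 -n kube-system get pods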

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-034375 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-034375 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-vncz7" [eb666357-b124-4a96-ad40-d90980cc2890] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0831 23:56:01.242442  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/addons-926553/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:56:03.120296  283197 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/kindnet-034375/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:345: "netcat-6fc964789b-vncz7" [eb666357-b124-4a96-ad40-d90980cc2890] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004265811s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-034375 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-034375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-034375 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.20s)

                                                
                                    

Test skip (30/338)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.59s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-718632 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-718632" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-718632
--- SKIP: TestDownloadOnlyKic (0.59s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-286135" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-286135
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)
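Skipped groups like this one still create and then delete a profile, which is why the cleanup lines appear above. A sketch of the same cleanup by hand, using the commands the helper output references:

# See which profiles exist
minikube profile list

# Remove the leftover profile created for the skipped group
minikube delete -p disable-driver-mounts-286135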

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-034375 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-034375

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-034375

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-034375

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-034375

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-034375

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-034375

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-034375

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-034375

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-034375

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-034375

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-034375

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-034375" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-034375" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-034375" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-034375" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-034375" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-034375" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-034375" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-034375" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-034375" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-034375" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-034375" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 31 Aug 2024 23:31:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-587308
contexts:
- context:
    cluster: pause-587308
    extensions:
    - extension:
        last-update: Sat, 31 Aug 2024 23:31:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-587308
  name: pause-587308
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-587308
  user:
    client-certificate: /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/pause-587308/client.crt
    client-key: /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/pause-587308/client.key
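The dump above shows why every probe in this debugLogs block fails: only the pause-587308 entry is left in the kubeconfig and current-context is empty, so there is no kubenet-034375 context to query. A sketch for confirming that from the same kubeconfig:

# List the contexts the debug probes could have used (kubenet-034375 is absent)
kubectl config get-contexts

# Show the active context; with current-context "" this reports that no current context is set
kubectl config current-context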

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-034375

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-034375"

                                                
                                                
----------------------- debugLogs end: kubenet-034375 [took: 3.37500505s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-034375" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-034375
--- SKIP: TestNetworkPlugins/group/kubenet (3.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-034375 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-034375

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-034375

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-034375

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-034375

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-034375

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-034375

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-034375

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-034375

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-034375

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-034375

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-034375

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-034375" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-034375" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-034375" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-034375" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-034375" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-034375" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-034375" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-034375" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-034375

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-034375

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-034375" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-034375" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-034375

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-034375

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-034375" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-034375" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-034375" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-034375" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-034375" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18943-277799/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 31 Aug 2024 23:31:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-587308
contexts:
- context:
    cluster: pause-587308
    extensions:
    - extension:
        last-update: Sat, 31 Aug 2024 23:31:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-587308
  name: pause-587308
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-587308
  user:
    client-certificate: /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/pause-587308/client.crt
    client-key: /home/jenkins/minikube-integration/18943-277799/.minikube/profiles/pause-587308/client.key
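Note: the kubeconfig dumped above contains only a pause-587308 entry (left behind by a concurrently running test) and its current-context is empty, which is why every kubectl probe against cilium-034375 in this dump fails with "context was not found". As a hedged illustration only (standard kubectl commands, not part of the captured log output), the available contexts could be listed and one selected like this:

	# list contexts known to this kubeconfig; cilium-034375 will be absent
	kubectl config get-contexts
	# select the only existing context (pause-587308) if one is needed
	kubectl config use-context pause-587308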

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-034375

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-034375" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-034375"

                                                
                                                
----------------------- debugLogs end: cilium-034375 [took: 3.705838667s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-034375" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-034375
--- SKIP: TestNetworkPlugins/group/cilium (3.86s)
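Note: every debugLogs probe above (and in the preceding kubenet-034375 dump) reports a missing profile or context because both tests were skipped before a cluster was ever started, so the corresponding minikube profiles never exist. A hedged sketch of how this could be confirmed on the test host, using standard minikube commands not taken from this log:

	# show which profiles actually exist; kubenet-034375 and cilium-034375 will not be listed
	out/minikube-linux-arm64 profile list
	# a skipped profile only comes into existence if it is started explicitly
	out/minikube-linux-arm64 start -p cilium-034375 --cni=cilium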

                                                
                                    