Test Report: Docker_Linux_crio_arm64 19423

74b5ac7e1cfb7233a98e35daf2ce49e3acb00be2:2024-08-19:35861
Test fail (3/328)

Order  Failed test                                   Duration (s)
34     TestAddons/parallel/Ingress                   154.24
36     TestAddons/parallel/MetricsServer             321.2
174    TestMultiControlPlane/serial/RestartCluster   137.21
TestAddons/parallel/Ingress (154.24s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-199708 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-199708 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-199708 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a71f5760-62fd-477f-bb99-3bb85f47a3d4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a71f5760-62fd-477f-bb99-3bb85f47a3d4] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003548663s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-199708 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-199708 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.254441531s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
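Note: curl exits with code 28 when an operation times out, and the "ssh: Process exited with status 28" above is the remote command's exit status, which suggests the request to the ingress controller on 127.0.0.1:80 inside the node never got a response. A minimal way to re-run the failing check by hand is sketched below; it reuses the commands already shown in this test, while the -v and --max-time flags are illustrative additions rather than part of the test itself:

	# check that the ingress-nginx controller pod is Ready in this profile's cluster
	kubectl --context addons-199708 -n ingress-nginx get pods -l app.kubernetes.io/component=controller
	# repeat the failing request from inside the node, with verbose output and a bounded timeout
	out/minikube-linux-arm64 -p addons-199708 ssh "curl -v --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"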
addons_test.go:288: (dbg) Run:  kubectl --context addons-199708 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-199708 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-199708 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-199708 addons disable ingress-dns --alsologtostderr -v=1: (1.373043048s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-199708 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-199708 addons disable ingress --alsologtostderr -v=1: (7.774146508s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-199708
helpers_test.go:235: (dbg) docker inspect addons-199708:

-- stdout --
	[
	    {
	        "Id": "be074196787c441acb1e4e3688132394ef9ece5e3c58835e5695543db41d4bce",
	        "Created": "2024-08-19T20:21:51.051185979Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1012739,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-19T20:21:51.219238715Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:decdd59746a9dba10062a73f6cd4b910c7b4e60613660b1022f8357747681c4d",
	        "ResolvConfPath": "/var/lib/docker/containers/be074196787c441acb1e4e3688132394ef9ece5e3c58835e5695543db41d4bce/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/be074196787c441acb1e4e3688132394ef9ece5e3c58835e5695543db41d4bce/hostname",
	        "HostsPath": "/var/lib/docker/containers/be074196787c441acb1e4e3688132394ef9ece5e3c58835e5695543db41d4bce/hosts",
	        "LogPath": "/var/lib/docker/containers/be074196787c441acb1e4e3688132394ef9ece5e3c58835e5695543db41d4bce/be074196787c441acb1e4e3688132394ef9ece5e3c58835e5695543db41d4bce-json.log",
	        "Name": "/addons-199708",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-199708:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-199708",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/50ebe248ba8ff8c5027153884c818110827697138b3f856747769ed250ca28e0-init/diff:/var/lib/docker/overlay2/9477ca3f94c975b8a19e34c7e6e216a8aaa21d9134153e903eb7147c449f54f5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/50ebe248ba8ff8c5027153884c818110827697138b3f856747769ed250ca28e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/50ebe248ba8ff8c5027153884c818110827697138b3f856747769ed250ca28e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/50ebe248ba8ff8c5027153884c818110827697138b3f856747769ed250ca28e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-199708",
	                "Source": "/var/lib/docker/volumes/addons-199708/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-199708",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-199708",
	                "name.minikube.sigs.k8s.io": "addons-199708",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "95f05de878bc3f5c51f783c6d692670d63dbaa5d2bcaca44505ae6ea419adcd3",
	            "SandboxKey": "/var/run/docker/netns/95f05de878bc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33898"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33899"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33900"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33901"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-199708": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a244ccc75e1fe53096da053ca8b3a7ce793a2735388b362ea1751023a3492c18",
	                    "EndpointID": "391283ea82d8c3c176cb8eae0c738159e9779721da593ca6147cd8d7e6205e01",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-199708",
	                        "be074196787c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-199708 -n addons-199708
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-199708 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-199708 logs -n 25: (1.440062559s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-983156                                                                     | download-only-983156   | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC | 19 Aug 24 20:21 UTC |
	| start   | --download-only -p                                                                          | download-docker-295909 | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC |                     |
	|         | download-docker-295909                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-295909                                                                   | download-docker-295909 | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC | 19 Aug 24 20:21 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-526736   | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC |                     |
	|         | binary-mirror-526736                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:38541                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-526736                                                                     | binary-mirror-526736   | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC | 19 Aug 24 20:21 UTC |
	| addons  | enable dashboard -p                                                                         | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC |                     |
	|         | addons-199708                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC |                     |
	|         | addons-199708                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-199708 --wait=true                                                                | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC | 19 Aug 24 20:24 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-199708 addons disable                                                                | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:25 UTC | 19 Aug 24 20:25 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-199708 ip                                                                            | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:25 UTC | 19 Aug 24 20:25 UTC |
	| addons  | addons-199708 addons disable                                                                | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:25 UTC | 19 Aug 24 20:25 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-199708 addons disable                                                                | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:25 UTC | 19 Aug 24 20:25 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-199708 addons                                                                        | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:25 UTC | 19 Aug 24 20:25 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:25 UTC | 19 Aug 24 20:25 UTC |
	|         | -p addons-199708                                                                            |                        |         |         |                     |                     |
	| addons  | addons-199708 addons                                                                        | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:25 UTC | 19 Aug 24 20:25 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-199708 ssh cat                                                                       | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:25 UTC | 19 Aug 24 20:25 UTC |
	|         | /opt/local-path-provisioner/pvc-da75018b-e55e-4bcd-afd0-fef3a5381dbe_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-199708 addons disable                                                                | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:25 UTC | 19 Aug 24 20:26 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:25 UTC | 19 Aug 24 20:25 UTC |
	|         | addons-199708                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:25 UTC | 19 Aug 24 20:25 UTC |
	|         | -p addons-199708                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-199708 addons disable                                                                | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:26 UTC | 19 Aug 24 20:26 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:26 UTC | 19 Aug 24 20:26 UTC |
	|         | addons-199708                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-199708 ssh curl -s                                                                   | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:26 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-199708 ip                                                                            | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:29 UTC | 19 Aug 24 20:29 UTC |
	| addons  | addons-199708 addons disable                                                                | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:29 UTC | 19 Aug 24 20:29 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-199708 addons disable                                                                | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:29 UTC | 19 Aug 24 20:29 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 20:21:26
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 20:21:26.429759 1012241 out.go:345] Setting OutFile to fd 1 ...
	I0819 20:21:26.429924 1012241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:21:26.429935 1012241 out.go:358] Setting ErrFile to fd 2...
	I0819 20:21:26.429940 1012241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:21:26.430197 1012241 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1006087/.minikube/bin
	I0819 20:21:26.430635 1012241 out.go:352] Setting JSON to false
	I0819 20:21:26.431503 1012241 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":14628,"bootTime":1724084259,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0819 20:21:26.431578 1012241 start.go:139] virtualization:  
	I0819 20:21:26.434266 1012241 out.go:177] * [addons-199708] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 20:21:26.436128 1012241 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 20:21:26.436319 1012241 notify.go:220] Checking for updates...
	I0819 20:21:26.441184 1012241 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 20:21:26.443237 1012241 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-1006087/kubeconfig
	I0819 20:21:26.445245 1012241 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1006087/.minikube
	I0819 20:21:26.447156 1012241 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 20:21:26.449128 1012241 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 20:21:26.451618 1012241 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 20:21:26.482653 1012241 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 20:21:26.482772 1012241 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 20:21:26.537352 1012241 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-19 20:21:26.527799225 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 20:21:26.537484 1012241 docker.go:307] overlay module found
	I0819 20:21:26.539067 1012241 out.go:177] * Using the docker driver based on user configuration
	I0819 20:21:26.540373 1012241 start.go:297] selected driver: docker
	I0819 20:21:26.540387 1012241 start.go:901] validating driver "docker" against <nil>
	I0819 20:21:26.540417 1012241 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 20:21:26.541069 1012241 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 20:21:26.595215 1012241 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-19 20:21:26.585358474 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 20:21:26.595412 1012241 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 20:21:26.595695 1012241 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 20:21:26.597034 1012241 out.go:177] * Using Docker driver with root privileges
	I0819 20:21:26.598283 1012241 cni.go:84] Creating CNI manager for ""
	I0819 20:21:26.598325 1012241 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 20:21:26.598355 1012241 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 20:21:26.598463 1012241 start.go:340] cluster config:
	{Name:addons-199708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-199708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 20:21:26.600294 1012241 out.go:177] * Starting "addons-199708" primary control-plane node in "addons-199708" cluster
	I0819 20:21:26.601718 1012241 cache.go:121] Beginning downloading kic base image for docker with crio
	I0819 20:21:26.603152 1012241 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0819 20:21:26.605033 1012241 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 20:21:26.605106 1012241 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-1006087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	I0819 20:21:26.605121 1012241 cache.go:56] Caching tarball of preloaded images
	I0819 20:21:26.605125 1012241 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 20:21:26.605207 1012241 preload.go:172] Found /home/jenkins/minikube-integration/19423-1006087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0819 20:21:26.605217 1012241 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 20:21:26.605561 1012241 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/config.json ...
	I0819 20:21:26.605584 1012241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/config.json: {Name:mk4982306a6c220b260448cb6dfbfeaf94699ae8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:26.621094 1012241 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 20:21:26.621236 1012241 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 20:21:26.621262 1012241 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0819 20:21:26.621270 1012241 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0819 20:21:26.621279 1012241 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 20:21:26.621290 1012241 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0819 20:21:43.932251 1012241 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0819 20:21:43.932292 1012241 cache.go:194] Successfully downloaded all kic artifacts
	I0819 20:21:43.932350 1012241 start.go:360] acquireMachinesLock for addons-199708: {Name:mk6c9c0160326aa0c0af4593d4c9c99fe90593b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 20:21:43.932983 1012241 start.go:364] duration metric: took 604.181µs to acquireMachinesLock for "addons-199708"
	I0819 20:21:43.933025 1012241 start.go:93] Provisioning new machine with config: &{Name:addons-199708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-199708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 20:21:43.933112 1012241 start.go:125] createHost starting for "" (driver="docker")
	I0819 20:21:43.935165 1012241 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0819 20:21:43.935406 1012241 start.go:159] libmachine.API.Create for "addons-199708" (driver="docker")
	I0819 20:21:43.935442 1012241 client.go:168] LocalClient.Create starting
	I0819 20:21:43.935549 1012241 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem
	I0819 20:21:44.250894 1012241 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/cert.pem
	I0819 20:21:44.795690 1012241 cli_runner.go:164] Run: docker network inspect addons-199708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0819 20:21:44.810659 1012241 cli_runner.go:211] docker network inspect addons-199708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0819 20:21:44.810747 1012241 network_create.go:284] running [docker network inspect addons-199708] to gather additional debugging logs...
	I0819 20:21:44.810769 1012241 cli_runner.go:164] Run: docker network inspect addons-199708
	W0819 20:21:44.826106 1012241 cli_runner.go:211] docker network inspect addons-199708 returned with exit code 1
	I0819 20:21:44.826139 1012241 network_create.go:287] error running [docker network inspect addons-199708]: docker network inspect addons-199708: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-199708 not found
	I0819 20:21:44.826152 1012241 network_create.go:289] output of [docker network inspect addons-199708]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-199708 not found
	
	** /stderr **
	I0819 20:21:44.826254 1012241 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 20:21:44.841850 1012241 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40004ae0e0}
	I0819 20:21:44.841894 1012241 network_create.go:124] attempt to create docker network addons-199708 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0819 20:21:44.841949 1012241 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-199708 addons-199708
	I0819 20:21:44.917828 1012241 network_create.go:108] docker network addons-199708 192.168.49.0/24 created
	I0819 20:21:44.917862 1012241 kic.go:121] calculated static IP "192.168.49.2" for the "addons-199708" container
	I0819 20:21:44.917937 1012241 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0819 20:21:44.932136 1012241 cli_runner.go:164] Run: docker volume create addons-199708 --label name.minikube.sigs.k8s.io=addons-199708 --label created_by.minikube.sigs.k8s.io=true
	I0819 20:21:44.947807 1012241 oci.go:103] Successfully created a docker volume addons-199708
	I0819 20:21:44.947901 1012241 cli_runner.go:164] Run: docker run --rm --name addons-199708-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-199708 --entrypoint /usr/bin/test -v addons-199708:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib
	I0819 20:21:46.932792 1012241 cli_runner.go:217] Completed: docker run --rm --name addons-199708-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-199708 --entrypoint /usr/bin/test -v addons-199708:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib: (1.984844761s)
	I0819 20:21:46.932834 1012241 oci.go:107] Successfully prepared a docker volume addons-199708
	I0819 20:21:46.932859 1012241 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 20:21:46.932878 1012241 kic.go:194] Starting extracting preloaded images to volume ...
	I0819 20:21:46.932974 1012241 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19423-1006087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-199708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir
	I0819 20:21:50.979674 1012241 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19423-1006087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-199708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir: (4.046656802s)
	I0819 20:21:50.979708 1012241 kic.go:203] duration metric: took 4.04682627s to extract preloaded images to volume ...
	W0819 20:21:50.979843 1012241 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0819 20:21:50.979961 1012241 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0819 20:21:51.035315 1012241 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-199708 --name addons-199708 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-199708 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-199708 --network addons-199708 --ip 192.168.49.2 --volume addons-199708:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d
	I0819 20:21:51.376885 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Running}}
	I0819 20:21:51.401666 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:21:51.429278 1012241 cli_runner.go:164] Run: docker exec addons-199708 stat /var/lib/dpkg/alternatives/iptables
	I0819 20:21:51.491656 1012241 oci.go:144] the created container "addons-199708" has a running status.
	I0819 20:21:51.491690 1012241 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa...
	I0819 20:21:52.279538 1012241 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0819 20:21:52.298675 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:21:52.317324 1012241 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0819 20:21:52.317350 1012241 kic_runner.go:114] Args: [docker exec --privileged addons-199708 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0819 20:21:52.385754 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:21:52.407521 1012241 machine.go:93] provisionDockerMachine start ...
	I0819 20:21:52.407619 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:21:52.425983 1012241 main.go:141] libmachine: Using SSH client type: native
	I0819 20:21:52.426266 1012241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33898 <nil> <nil>}
	I0819 20:21:52.426282 1012241 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 20:21:52.557270 1012241 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-199708
	
	I0819 20:21:52.557341 1012241 ubuntu.go:169] provisioning hostname "addons-199708"
	I0819 20:21:52.557450 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:21:52.574183 1012241 main.go:141] libmachine: Using SSH client type: native
	I0819 20:21:52.574432 1012241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33898 <nil> <nil>}
	I0819 20:21:52.574451 1012241 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-199708 && echo "addons-199708" | sudo tee /etc/hostname
	I0819 20:21:52.718325 1012241 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-199708
	
	I0819 20:21:52.718408 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:21:52.736630 1012241 main.go:141] libmachine: Using SSH client type: native
	I0819 20:21:52.736892 1012241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33898 <nil> <nil>}
	I0819 20:21:52.736917 1012241 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-199708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-199708/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-199708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 20:21:52.866022 1012241 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 20:21:52.866055 1012241 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19423-1006087/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-1006087/.minikube}
	I0819 20:21:52.866081 1012241 ubuntu.go:177] setting up certificates
	I0819 20:21:52.866092 1012241 provision.go:84] configureAuth start
	I0819 20:21:52.866158 1012241 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-199708
	I0819 20:21:52.883238 1012241 provision.go:143] copyHostCerts
	I0819 20:21:52.883324 1012241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-1006087/.minikube/cert.pem (1123 bytes)
	I0819 20:21:52.883446 1012241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-1006087/.minikube/key.pem (1675 bytes)
	I0819 20:21:52.883505 1012241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.pem (1082 bytes)
	I0819 20:21:52.883557 1012241 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca-key.pem org=jenkins.addons-199708 san=[127.0.0.1 192.168.49.2 addons-199708 localhost minikube]
	I0819 20:21:53.382790 1012241 provision.go:177] copyRemoteCerts
	I0819 20:21:53.382857 1012241 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 20:21:53.382927 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:21:53.399683 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:21:53.494692 1012241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 20:21:53.519713 1012241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 20:21:53.544807 1012241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 20:21:53.570081 1012241 provision.go:87] duration metric: took 703.974684ms to configureAuth
	I0819 20:21:53.570115 1012241 ubuntu.go:193] setting minikube options for container-runtime
	I0819 20:21:53.570336 1012241 config.go:182] Loaded profile config "addons-199708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 20:21:53.570462 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:21:53.586760 1012241 main.go:141] libmachine: Using SSH client type: native
	I0819 20:21:53.587008 1012241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33898 <nil> <nil>}
	I0819 20:21:53.587029 1012241 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 20:21:53.822303 1012241 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 20:21:53.822330 1012241 machine.go:96] duration metric: took 1.414788015s to provisionDockerMachine
	I0819 20:21:53.822340 1012241 client.go:171] duration metric: took 9.886889796s to LocalClient.Create
	I0819 20:21:53.822393 1012241 start.go:167] duration metric: took 9.886987084s to libmachine.API.Create "addons-199708"
	I0819 20:21:53.822407 1012241 start.go:293] postStartSetup for "addons-199708" (driver="docker")
	I0819 20:21:53.822418 1012241 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 20:21:53.822527 1012241 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 20:21:53.822590 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:21:53.840701 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:21:53.935316 1012241 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 20:21:53.938664 1012241 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0819 20:21:53.938702 1012241 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0819 20:21:53.938716 1012241 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0819 20:21:53.938723 1012241 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0819 20:21:53.938735 1012241 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-1006087/.minikube/addons for local assets ...
	I0819 20:21:53.938807 1012241 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-1006087/.minikube/files for local assets ...
	I0819 20:21:53.938832 1012241 start.go:296] duration metric: took 116.418837ms for postStartSetup
	I0819 20:21:53.939164 1012241 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-199708
	I0819 20:21:53.955331 1012241 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/config.json ...
	I0819 20:21:53.955643 1012241 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 20:21:53.955712 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:21:53.972244 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:21:54.063316 1012241 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0819 20:21:54.068821 1012241 start.go:128] duration metric: took 10.135692886s to createHost
	I0819 20:21:54.068845 1012241 start.go:83] releasing machines lock for "addons-199708", held for 10.135840996s
	I0819 20:21:54.068929 1012241 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-199708
	I0819 20:21:54.086194 1012241 ssh_runner.go:195] Run: cat /version.json
	I0819 20:21:54.086252 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:21:54.086332 1012241 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 20:21:54.086408 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:21:54.109233 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:21:54.119601 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:21:54.330914 1012241 ssh_runner.go:195] Run: systemctl --version
	I0819 20:21:54.335254 1012241 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 20:21:54.477169 1012241 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 20:21:54.482263 1012241 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 20:21:54.504353 1012241 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0819 20:21:54.504451 1012241 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 20:21:54.542327 1012241 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0819 20:21:54.542350 1012241 start.go:495] detecting cgroup driver to use...
	I0819 20:21:54.542386 1012241 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0819 20:21:54.542449 1012241 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 20:21:54.558492 1012241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 20:21:54.569776 1012241 docker.go:217] disabling cri-docker service (if available) ...
	I0819 20:21:54.569876 1012241 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 20:21:54.583585 1012241 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 20:21:54.598293 1012241 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 20:21:54.679835 1012241 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 20:21:54.780885 1012241 docker.go:233] disabling docker service ...
	I0819 20:21:54.780998 1012241 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 20:21:54.802942 1012241 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 20:21:54.815692 1012241 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 20:21:54.899204 1012241 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 20:21:54.988805 1012241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 20:21:55.001351 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 20:21:55.041134 1012241 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 20:21:55.041214 1012241 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:21:55.053990 1012241 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 20:21:55.054124 1012241 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:21:55.065775 1012241 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:21:55.077839 1012241 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:21:55.090609 1012241 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 20:21:55.101947 1012241 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:21:55.113984 1012241 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:21:55.132795 1012241 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:21:55.143464 1012241 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 20:21:55.152717 1012241 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 20:21:55.161673 1012241 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 20:21:55.243931 1012241 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 20:21:55.365971 1012241 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 20:21:55.366103 1012241 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 20:21:55.369933 1012241 start.go:563] Will wait 60s for crictl version
	I0819 20:21:55.370049 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:21:55.373677 1012241 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 20:21:55.418660 1012241 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0819 20:21:55.418817 1012241 ssh_runner.go:195] Run: crio --version
	I0819 20:21:55.462160 1012241 ssh_runner.go:195] Run: crio --version
	I0819 20:21:55.504151 1012241 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0819 20:21:55.506896 1012241 cli_runner.go:164] Run: docker network inspect addons-199708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 20:21:55.522935 1012241 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0819 20:21:55.526662 1012241 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 20:21:55.537914 1012241 kubeadm.go:883] updating cluster {Name:addons-199708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-199708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 20:21:55.538045 1012241 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 20:21:55.538106 1012241 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 20:21:55.614772 1012241 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 20:21:55.614798 1012241 crio.go:433] Images already preloaded, skipping extraction
	I0819 20:21:55.614863 1012241 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 20:21:55.658355 1012241 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 20:21:55.658378 1012241 cache_images.go:84] Images are preloaded, skipping loading
	I0819 20:21:55.658388 1012241 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0819 20:21:55.658494 1012241 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-199708 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-199708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 20:21:55.658578 1012241 ssh_runner.go:195] Run: crio config
	I0819 20:21:55.706254 1012241 cni.go:84] Creating CNI manager for ""
	I0819 20:21:55.706278 1012241 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 20:21:55.706291 1012241 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 20:21:55.706343 1012241 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-199708 NodeName:addons-199708 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 20:21:55.706512 1012241 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-199708"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 20:21:55.706586 1012241 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 20:21:55.715428 1012241 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 20:21:55.715526 1012241 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 20:21:55.724240 1012241 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0819 20:21:55.742720 1012241 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 20:21:55.760589 1012241 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0819 20:21:55.779008 1012241 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0819 20:21:55.782677 1012241 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 20:21:55.793406 1012241 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 20:21:55.882133 1012241 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 20:21:55.896172 1012241 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708 for IP: 192.168.49.2
	I0819 20:21:55.896241 1012241 certs.go:194] generating shared ca certs ...
	I0819 20:21:55.896274 1012241 certs.go:226] acquiring lock for ca certs: {Name:mka0619a4a0da3f790025b70d844d99358d748e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:55.896435 1012241 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.key
	I0819 20:21:56.308101 1012241 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.crt ...
	I0819 20:21:56.308137 1012241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.crt: {Name:mk16233753a16be3afb1d9ab0b22ac21b265489c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:56.308798 1012241 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.key ...
	I0819 20:21:56.308815 1012241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.key: {Name:mke7a5da15253b7a448fe87628f984b4e0e6c17a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:56.308911 1012241 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/proxy-client-ca.key
	I0819 20:21:57.087300 1012241 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-1006087/.minikube/proxy-client-ca.crt ...
	I0819 20:21:57.087332 1012241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1006087/.minikube/proxy-client-ca.crt: {Name:mk107d2da75913e05f292352bb957802fe834044 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:57.087991 1012241 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-1006087/.minikube/proxy-client-ca.key ...
	I0819 20:21:57.088045 1012241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1006087/.minikube/proxy-client-ca.key: {Name:mk4dd8e1a67e76977a6797072eacca1d96cb43c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:57.092231 1012241 certs.go:256] generating profile certs ...
	I0819 20:21:57.092394 1012241 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.key
	I0819 20:21:57.092432 1012241 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt with IP's: []
	I0819 20:21:57.771472 1012241 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt ...
	I0819 20:21:57.771509 1012241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: {Name:mk1c2d35b33baec32c6203a2c13726cd6d4387a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:57.774483 1012241 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.key ...
	I0819 20:21:57.774507 1012241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.key: {Name:mke10ffeaf88ab9095075d3e1e57386d96745e44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:57.775070 1012241 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/apiserver.key.8b4edbc0
	I0819 20:21:57.775094 1012241 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/apiserver.crt.8b4edbc0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0819 20:21:57.969392 1012241 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/apiserver.crt.8b4edbc0 ...
	I0819 20:21:57.969425 1012241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/apiserver.crt.8b4edbc0: {Name:mkb0faeb8ba4b2865b55c94e3f37afd3dd19a23b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:57.969642 1012241 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/apiserver.key.8b4edbc0 ...
	I0819 20:21:57.969659 1012241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/apiserver.key.8b4edbc0: {Name:mk4a9645d963042e194024d45aa216d82aed2544 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:57.969754 1012241 certs.go:381] copying /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/apiserver.crt.8b4edbc0 -> /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/apiserver.crt
	I0819 20:21:57.969833 1012241 certs.go:385] copying /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/apiserver.key.8b4edbc0 -> /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/apiserver.key
	I0819 20:21:57.969889 1012241 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/proxy-client.key
	I0819 20:21:57.969906 1012241 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/proxy-client.crt with IP's: []
	I0819 20:21:58.812910 1012241 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/proxy-client.crt ...
	I0819 20:21:58.812945 1012241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/proxy-client.crt: {Name:mkc1d5c82a652223a3d7b19df127f6a13fd3a426 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:58.813134 1012241 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/proxy-client.key ...
	I0819 20:21:58.813149 1012241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/proxy-client.key: {Name:mk3cd1f973f125beee0d9d76964cb35efde0f800 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:58.822382 1012241 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 20:21:58.822457 1012241 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem (1082 bytes)
	I0819 20:21:58.822485 1012241 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/cert.pem (1123 bytes)
	I0819 20:21:58.822513 1012241 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/key.pem (1675 bytes)
	I0819 20:21:58.823218 1012241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 20:21:58.849151 1012241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 20:21:58.875664 1012241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 20:21:58.900757 1012241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 20:21:58.925956 1012241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 20:21:58.952179 1012241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 20:21:58.978023 1012241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 20:21:59.005518 1012241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 20:21:59.031576 1012241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 20:21:59.056910 1012241 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 20:21:59.075851 1012241 ssh_runner.go:195] Run: openssl version
	I0819 20:21:59.081683 1012241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 20:21:59.091679 1012241 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:21:59.095435 1012241 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 20:21 /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:21:59.095526 1012241 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:21:59.102804 1012241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 20:21:59.112617 1012241 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 20:21:59.116165 1012241 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 20:21:59.116240 1012241 kubeadm.go:392] StartCluster: {Name:addons-199708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-199708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 20:21:59.116333 1012241 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 20:21:59.116397 1012241 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 20:21:59.154981 1012241 cri.go:89] found id: ""
	I0819 20:21:59.155053 1012241 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 20:21:59.164138 1012241 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 20:21:59.173169 1012241 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0819 20:21:59.173258 1012241 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 20:21:59.182444 1012241 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 20:21:59.182467 1012241 kubeadm.go:157] found existing configuration files:
	
	I0819 20:21:59.182525 1012241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 20:21:59.191920 1012241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 20:21:59.192012 1012241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 20:21:59.200366 1012241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 20:21:59.209291 1012241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 20:21:59.209379 1012241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 20:21:59.218057 1012241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 20:21:59.226885 1012241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 20:21:59.226953 1012241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 20:21:59.235710 1012241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 20:21:59.244978 1012241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 20:21:59.245075 1012241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 20:21:59.254157 1012241 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0819 20:21:59.298227 1012241 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 20:21:59.298564 1012241 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 20:21:59.316122 1012241 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0819 20:21:59.316195 1012241 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1067-aws
	I0819 20:21:59.316237 1012241 kubeadm.go:310] OS: Linux
	I0819 20:21:59.316288 1012241 kubeadm.go:310] CGROUPS_CPU: enabled
	I0819 20:21:59.316337 1012241 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0819 20:21:59.316387 1012241 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0819 20:21:59.316438 1012241 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0819 20:21:59.316488 1012241 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0819 20:21:59.316538 1012241 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0819 20:21:59.316585 1012241 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0819 20:21:59.316634 1012241 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0819 20:21:59.316683 1012241 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0819 20:21:59.384211 1012241 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 20:21:59.384333 1012241 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 20:21:59.384426 1012241 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 20:21:59.391216 1012241 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 20:21:59.394946 1012241 out.go:235]   - Generating certificates and keys ...
	I0819 20:21:59.395051 1012241 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 20:21:59.395120 1012241 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 20:21:59.496591 1012241 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 20:22:00.087810 1012241 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 20:22:00.855824 1012241 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 20:22:01.406449 1012241 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 20:22:01.944783 1012241 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 20:22:01.945121 1012241 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-199708 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0819 20:22:03.327965 1012241 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 20:22:03.328316 1012241 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-199708 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0819 20:22:03.517207 1012241 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 20:22:03.905872 1012241 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 20:22:04.317779 1012241 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 20:22:04.318029 1012241 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 20:22:04.834976 1012241 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 20:22:05.248754 1012241 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 20:22:05.609469 1012241 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 20:22:05.998263 1012241 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 20:22:06.485971 1012241 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 20:22:06.486808 1012241 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 20:22:06.489850 1012241 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 20:22:06.492955 1012241 out.go:235]   - Booting up control plane ...
	I0819 20:22:06.493066 1012241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 20:22:06.493142 1012241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 20:22:06.493207 1012241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 20:22:06.507389 1012241 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 20:22:06.515693 1012241 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 20:22:06.516095 1012241 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 20:22:06.613581 1012241 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 20:22:06.613723 1012241 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 20:22:08.114935 1012241 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501535655s
	I0819 20:22:08.115026 1012241 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 20:22:14.118330 1012241 kubeadm.go:310] [api-check] The API server is healthy after 6.001276124s
	I0819 20:22:14.137315 1012241 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 20:22:14.151167 1012241 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 20:22:14.178434 1012241 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 20:22:14.178676 1012241 kubeadm.go:310] [mark-control-plane] Marking the node addons-199708 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 20:22:14.190646 1012241 kubeadm.go:310] [bootstrap-token] Using token: 2z756t.aqpurkuidy5qgcsv
	I0819 20:22:14.193471 1012241 out.go:235]   - Configuring RBAC rules ...
	I0819 20:22:14.193649 1012241 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 20:22:14.199099 1012241 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 20:22:14.207161 1012241 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 20:22:14.211809 1012241 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 20:22:14.217831 1012241 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 20:22:14.223758 1012241 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 20:22:14.526289 1012241 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 20:22:14.975063 1012241 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 20:22:15.524712 1012241 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 20:22:15.525947 1012241 kubeadm.go:310] 
	I0819 20:22:15.526022 1012241 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 20:22:15.526028 1012241 kubeadm.go:310] 
	I0819 20:22:15.526103 1012241 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 20:22:15.526108 1012241 kubeadm.go:310] 
	I0819 20:22:15.526133 1012241 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 20:22:15.526190 1012241 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 20:22:15.526240 1012241 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 20:22:15.526244 1012241 kubeadm.go:310] 
	I0819 20:22:15.526296 1012241 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 20:22:15.526301 1012241 kubeadm.go:310] 
	I0819 20:22:15.526358 1012241 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 20:22:15.526364 1012241 kubeadm.go:310] 
	I0819 20:22:15.526415 1012241 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 20:22:15.526487 1012241 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 20:22:15.526553 1012241 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 20:22:15.526558 1012241 kubeadm.go:310] 
	I0819 20:22:15.526640 1012241 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 20:22:15.526714 1012241 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 20:22:15.526719 1012241 kubeadm.go:310] 
	I0819 20:22:15.526807 1012241 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2z756t.aqpurkuidy5qgcsv \
	I0819 20:22:15.526908 1012241 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:90d2106fd3f826fb0274ca14be0cbc03f42e5b76c699b68b73c6c89fab9fb6bb \
	I0819 20:22:15.526928 1012241 kubeadm.go:310] 	--control-plane 
	I0819 20:22:15.526933 1012241 kubeadm.go:310] 
	I0819 20:22:15.527015 1012241 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 20:22:15.527020 1012241 kubeadm.go:310] 
	I0819 20:22:15.527099 1012241 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2z756t.aqpurkuidy5qgcsv \
	I0819 20:22:15.527198 1012241 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:90d2106fd3f826fb0274ca14be0cbc03f42e5b76c699b68b73c6c89fab9fb6bb 
	I0819 20:22:15.531296 1012241 kubeadm.go:310] W0819 20:21:59.289952    1194 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 20:22:15.531593 1012241 kubeadm.go:310] W0819 20:21:59.295526    1194 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 20:22:15.531802 1012241 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1067-aws\n", err: exit status 1
	I0819 20:22:15.531908 1012241 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 20:22:15.531931 1012241 cni.go:84] Creating CNI manager for ""
	I0819 20:22:15.531943 1012241 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 20:22:15.536973 1012241 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 20:22:15.539505 1012241 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 20:22:15.543597 1012241 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 20:22:15.543622 1012241 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 20:22:15.562706 1012241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0819 20:22:15.839514 1012241 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 20:22:15.839666 1012241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:22:15.839761 1012241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-199708 minikube.k8s.io/updated_at=2024_08_19T20_22_15_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=7253360125032c7e2214e25ff4b5c894ae5844e8 minikube.k8s.io/name=addons-199708 minikube.k8s.io/primary=true
	I0819 20:22:15.965860 1012241 ops.go:34] apiserver oom_adj: -16
	I0819 20:22:15.965954 1012241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:22:16.466736 1012241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:22:16.966851 1012241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:22:17.467012 1012241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:22:17.966269 1012241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:22:18.466557 1012241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:22:18.966834 1012241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:22:19.466447 1012241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:22:19.609943 1012241 kubeadm.go:1113] duration metric: took 3.770333958s to wait for elevateKubeSystemPrivileges
	I0819 20:22:19.609969 1012241 kubeadm.go:394] duration metric: took 20.493757545s to StartCluster
	I0819 20:22:19.609985 1012241 settings.go:142] acquiring lock: {Name:mk3a0c8d8afbf5cfbc8b518d1bda35579f7cba54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:22:19.610724 1012241 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-1006087/kubeconfig
	I0819 20:22:19.611135 1012241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1006087/kubeconfig: {Name:mk82300af76d6335c7b97db5d9d0a0f9960b80de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:22:19.611373 1012241 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 20:22:19.611502 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 20:22:19.611787 1012241 config.go:182] Loaded profile config "addons-199708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 20:22:19.611817 1012241 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
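For reference, each entry in the toEnable map above corresponds to a per-profile addon toggle; the same switches can also be flipped by hand with the addons subcommand (an illustrative sketch only — binary path and profile name are taken from this run, the specific addons named below are examples):

	out/minikube-linux-arm64 -p addons-199708 addons list
	out/minikube-linux-arm64 -p addons-199708 addons enable metrics-server
	out/minikube-linux-arm64 -p addons-199708 addons disable volcano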
	I0819 20:22:19.611895 1012241 addons.go:69] Setting yakd=true in profile "addons-199708"
	I0819 20:22:19.611917 1012241 addons.go:234] Setting addon yakd=true in "addons-199708"
	I0819 20:22:19.611942 1012241 host.go:66] Checking if "addons-199708" exists ...
	I0819 20:22:19.612429 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:19.613083 1012241 addons.go:69] Setting inspektor-gadget=true in profile "addons-199708"
	I0819 20:22:19.613120 1012241 addons.go:234] Setting addon inspektor-gadget=true in "addons-199708"
	I0819 20:22:19.613150 1012241 host.go:66] Checking if "addons-199708" exists ...
	I0819 20:22:19.613585 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:19.614104 1012241 addons.go:69] Setting cloud-spanner=true in profile "addons-199708"
	I0819 20:22:19.614138 1012241 addons.go:234] Setting addon cloud-spanner=true in "addons-199708"
	I0819 20:22:19.614165 1012241 host.go:66] Checking if "addons-199708" exists ...
	I0819 20:22:19.614564 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:19.621668 1012241 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-199708"
	I0819 20:22:19.621749 1012241 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-199708"
	I0819 20:22:19.621782 1012241 host.go:66] Checking if "addons-199708" exists ...
	I0819 20:22:19.622234 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:19.622641 1012241 addons.go:69] Setting metrics-server=true in profile "addons-199708"
	I0819 20:22:19.622727 1012241 addons.go:234] Setting addon metrics-server=true in "addons-199708"
	I0819 20:22:19.622813 1012241 host.go:66] Checking if "addons-199708" exists ...
	I0819 20:22:19.623983 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:19.626922 1012241 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-199708"
	I0819 20:22:19.626970 1012241 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-199708"
	I0819 20:22:19.627008 1012241 host.go:66] Checking if "addons-199708" exists ...
	I0819 20:22:19.627440 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:19.628927 1012241 addons.go:69] Setting default-storageclass=true in profile "addons-199708"
	I0819 20:22:19.648481 1012241 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-199708"
	I0819 20:22:19.648878 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:19.629118 1012241 addons.go:69] Setting gcp-auth=true in profile "addons-199708"
	I0819 20:22:19.657796 1012241 mustload.go:65] Loading cluster: addons-199708
	I0819 20:22:19.658011 1012241 config.go:182] Loaded profile config "addons-199708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 20:22:19.658325 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:19.629132 1012241 addons.go:69] Setting ingress=true in profile "addons-199708"
	I0819 20:22:19.673992 1012241 addons.go:234] Setting addon ingress=true in "addons-199708"
	I0819 20:22:19.629140 1012241 addons.go:69] Setting ingress-dns=true in profile "addons-199708"
	I0819 20:22:19.674945 1012241 addons.go:234] Setting addon ingress-dns=true in "addons-199708"
	I0819 20:22:19.675002 1012241 host.go:66] Checking if "addons-199708" exists ...
	I0819 20:22:19.629363 1012241 out.go:177] * Verifying Kubernetes components...
	I0819 20:22:19.648342 1012241 addons.go:69] Setting registry=true in profile "addons-199708"
	I0819 20:22:19.648357 1012241 addons.go:69] Setting storage-provisioner=true in profile "addons-199708"
	I0819 20:22:19.648364 1012241 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-199708"
	I0819 20:22:19.648368 1012241 addons.go:69] Setting volcano=true in profile "addons-199708"
	I0819 20:22:19.648380 1012241 addons.go:69] Setting volumesnapshots=true in profile "addons-199708"
	I0819 20:22:19.674863 1012241 host.go:66] Checking if "addons-199708" exists ...
	I0819 20:22:19.675644 1012241 addons.go:234] Setting addon registry=true in "addons-199708"
	I0819 20:22:19.677627 1012241 host.go:66] Checking if "addons-199708" exists ...
	I0819 20:22:19.683167 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:19.684437 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:19.683363 1012241 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 20:22:19.676150 1012241 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-199708"
	I0819 20:22:19.687745 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:19.690357 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:19.676165 1012241 addons.go:234] Setting addon volcano=true in "addons-199708"
	I0819 20:22:19.717541 1012241 host.go:66] Checking if "addons-199708" exists ...
	I0819 20:22:19.718184 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:19.676176 1012241 addons.go:234] Setting addon volumesnapshots=true in "addons-199708"
	I0819 20:22:19.730153 1012241 host.go:66] Checking if "addons-199708" exists ...
	I0819 20:22:19.730886 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:19.675924 1012241 addons.go:234] Setting addon storage-provisioner=true in "addons-199708"
	I0819 20:22:19.766735 1012241 host.go:66] Checking if "addons-199708" exists ...
	I0819 20:22:19.767421 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:19.776007 1012241 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0819 20:22:19.776315 1012241 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0819 20:22:19.776479 1012241 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0819 20:22:19.781721 1012241 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0819 20:22:19.783614 1012241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0819 20:22:19.783695 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:22:19.795194 1012241 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0819 20:22:19.795267 1012241 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0819 20:22:19.795355 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:22:19.799388 1012241 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0819 20:22:19.799776 1012241 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 20:22:19.799792 1012241 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 20:22:19.799868 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:22:19.819047 1012241 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0819 20:22:19.824377 1012241 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0819 20:22:19.824410 1012241 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0819 20:22:19.824449 1012241 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0819 20:22:19.824506 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:22:19.837735 1012241 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0819 20:22:19.862617 1012241 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0819 20:22:19.868609 1012241 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0819 20:22:19.872329 1012241 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0819 20:22:19.877789 1012241 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 20:22:19.877815 1012241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0819 20:22:19.877888 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:22:19.879520 1012241 addons.go:234] Setting addon default-storageclass=true in "addons-199708"
	I0819 20:22:19.879556 1012241 host.go:66] Checking if "addons-199708" exists ...
	I0819 20:22:19.879970 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:19.921655 1012241 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0819 20:22:19.924490 1012241 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0819 20:22:19.933742 1012241 host.go:66] Checking if "addons-199708" exists ...
	I0819 20:22:19.945667 1012241 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0819 20:22:19.948322 1012241 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0819 20:22:19.948356 1012241 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0819 20:22:19.948426 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:22:19.953495 1012241 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0819 20:22:19.954637 1012241 out.go:177]   - Using image docker.io/registry:2.8.3
	I0819 20:22:19.960283 1012241 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0819 20:22:19.963080 1012241 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0819 20:22:19.963112 1012241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0819 20:22:19.963208 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:22:19.979533 1012241 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 20:22:19.979569 1012241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0819 20:22:19.979647 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:22:19.994554 1012241 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-199708"
	I0819 20:22:19.994608 1012241 host.go:66] Checking if "addons-199708" exists ...
	I0819 20:22:19.995106 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:20.016607 1012241 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0819 20:22:20.022699 1012241 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 20:22:20.024836 1012241 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0819 20:22:20.029513 1012241 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0819 20:22:20.029559 1012241 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0819 20:22:20.029685 1012241 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 20:22:20.029856 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	W0819 20:22:20.051946 1012241 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0819 20:22:20.052599 1012241 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 20:22:20.052618 1012241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0819 20:22:20.052681 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:22:20.053158 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:22:20.082190 1012241 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 20:22:20.082851 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:22:20.083700 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:22:20.085216 1012241 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 20:22:20.085237 1012241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 20:22:20.085310 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:22:20.129221 1012241 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 20:22:20.129243 1012241 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 20:22:20.129309 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:22:20.149998 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:22:20.160156 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:22:20.173034 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:22:20.216219 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:22:20.238861 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:22:20.242485 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:22:20.244696 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:22:20.249737 1012241 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0819 20:22:20.252551 1012241 out.go:177]   - Using image docker.io/busybox:stable
	I0819 20:22:20.256699 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:22:20.257632 1012241 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 20:22:20.257653 1012241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0819 20:22:20.257721 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	W0819 20:22:20.266329 1012241 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0819 20:22:20.266362 1012241 retry.go:31] will retry after 204.966099ms: ssh: handshake failed: EOF
	I0819 20:22:20.268520 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:22:20.306475 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 20:22:20.315969 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:22:20.373471 1012241 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 20:22:20.493482 1012241 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0819 20:22:20.493570 1012241 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0819 20:22:20.538764 1012241 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0819 20:22:20.538836 1012241 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0819 20:22:20.549491 1012241 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0819 20:22:20.549571 1012241 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0819 20:22:20.556266 1012241 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 20:22:20.556334 1012241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0819 20:22:20.600851 1012241 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0819 20:22:20.600929 1012241 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0819 20:22:20.624848 1012241 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0819 20:22:20.624921 1012241 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0819 20:22:20.632377 1012241 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0819 20:22:20.632451 1012241 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0819 20:22:20.663990 1012241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0819 20:22:20.692929 1012241 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0819 20:22:20.692999 1012241 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0819 20:22:20.695695 1012241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 20:22:20.698011 1012241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 20:22:20.747179 1012241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 20:22:20.752228 1012241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 20:22:20.755503 1012241 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 20:22:20.755576 1012241 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 20:22:20.757190 1012241 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0819 20:22:20.757247 1012241 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0819 20:22:20.776772 1012241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 20:22:20.818545 1012241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 20:22:20.845433 1012241 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0819 20:22:20.845505 1012241 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0819 20:22:20.853871 1012241 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0819 20:22:20.853941 1012241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0819 20:22:20.885588 1012241 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0819 20:22:20.885686 1012241 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0819 20:22:20.916560 1012241 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0819 20:22:20.916582 1012241 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0819 20:22:20.965576 1012241 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 20:22:20.965612 1012241 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 20:22:21.064716 1012241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0819 20:22:21.079157 1012241 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0819 20:22:21.079237 1012241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0819 20:22:21.145137 1012241 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0819 20:22:21.145213 1012241 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0819 20:22:21.196813 1012241 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0819 20:22:21.196896 1012241 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0819 20:22:21.209322 1012241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 20:22:21.213993 1012241 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0819 20:22:21.214068 1012241 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0819 20:22:21.329982 1012241 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0819 20:22:21.330056 1012241 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0819 20:22:21.355456 1012241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0819 20:22:21.410252 1012241 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0819 20:22:21.410332 1012241 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0819 20:22:21.480194 1012241 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 20:22:21.480267 1012241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0819 20:22:21.503364 1012241 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0819 20:22:21.503441 1012241 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0819 20:22:21.574543 1012241 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0819 20:22:21.574621 1012241 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0819 20:22:21.639779 1012241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 20:22:21.656252 1012241 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0819 20:22:21.656322 1012241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0819 20:22:21.685185 1012241 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0819 20:22:21.685260 1012241 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0819 20:22:21.764573 1012241 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0819 20:22:21.764650 1012241 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0819 20:22:21.801338 1012241 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 20:22:21.801408 1012241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0819 20:22:21.839908 1012241 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0819 20:22:21.839979 1012241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0819 20:22:21.870557 1012241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 20:22:21.899067 1012241 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0819 20:22:21.899142 1012241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0819 20:22:21.985947 1012241 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 20:22:21.986021 1012241 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0819 20:22:22.065511 1012241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 20:22:23.855136 1012241 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.481590046s)
	I0819 20:22:23.855244 1012241 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.548687926s)
	I0819 20:22:23.855383 1012241 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
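The sed pipeline completed above rewrites the kube-system/coredns ConfigMap so the Corefile gains a hosts block mapping 192.168.49.1 to host.minikube.internal (with fallthrough) ahead of the forward directive, plus a log directive before errors. A minimal way to confirm the injected record on the live cluster (an illustrative check, not part of the harness; the context name is this profile's):

	kubectl --context addons-199708 -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'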
	I0819 20:22:23.856972 1012241 node_ready.go:35] waiting up to 6m0s for node "addons-199708" to be "Ready" ...
	I0819 20:22:24.716966 1012241 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-199708" context rescaled to 1 replicas
	I0819 20:22:24.962051 1012241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.297979096s)
	I0819 20:22:24.962161 1012241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.266409446s)
	I0819 20:22:25.775387 1012241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.077306584s)
	I0819 20:22:25.883801 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:22:26.915995 1012241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.168733835s)
	I0819 20:22:26.916075 1012241 addons.go:475] Verifying addon ingress=true in "addons-199708"
	I0819 20:22:26.916268 1012241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.163978114s)
	I0819 20:22:26.916414 1012241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.139574894s)
	I0819 20:22:26.916518 1012241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.097905065s)
	I0819 20:22:26.916585 1012241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.851843704s)
	I0819 20:22:26.916641 1012241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.707254171s)
	I0819 20:22:26.916975 1012241 addons.go:475] Verifying addon metrics-server=true in "addons-199708"
	I0819 20:22:26.916666 1012241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.561138141s)
	I0819 20:22:26.917011 1012241 addons.go:475] Verifying addon registry=true in "addons-199708"
	I0819 20:22:26.916719 1012241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.276860132s)
	I0819 20:22:26.918612 1012241 out.go:177] * Verifying registry addon...
	I0819 20:22:26.920669 1012241 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-199708 service yakd-dashboard -n yakd-dashboard
	
	I0819 20:22:26.921351 1012241 out.go:177] * Verifying ingress addon...
	I0819 20:22:26.921721 1012241 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0819 20:22:26.924605 1012241 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0819 20:22:26.935092 1012241 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 20:22:26.935173 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:26.935430 1012241 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0819 20:22:26.935451 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
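These kapi waits poll each label selector until the matching pods report Running/Ready. Outside the harness, the registry gate can be approximated with kubectl wait (a rough sketch; unlike kapi, kubectl wait errors out immediately if no pod matches the selector yet, and the timeout below is illustrative):

	kubectl --context addons-199708 -n kube-system wait pod -l kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=6m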
	I0819 20:22:26.978952 1012241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.108313678s)
	W0819 20:22:26.979027 1012241 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 20:22:26.979062 1012241 retry.go:31] will retry after 240.59173ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
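The failure above is a CRD-ordering race: the VolumeSnapshotClass object is submitted in the same apply batch as the CRDs that define it, before the API server has established them, so minikube simply retries (with --force, next line). When applying these manifests by hand, the race can be avoided by installing the CRDs first and waiting for them to become Established (a hedged sketch using the file paths from the log above):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml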
	I0819 20:22:27.220787 1012241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 20:22:27.228666 1012241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.163052943s)
	I0819 20:22:27.228741 1012241 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-199708"
	I0819 20:22:27.233297 1012241 out.go:177] * Verifying csi-hostpath-driver addon...
	I0819 20:22:27.236896 1012241 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0819 20:22:27.258373 1012241 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 20:22:27.258401 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:27.427760 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:27.431641 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:27.764654 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:27.925898 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:27.930110 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:28.256140 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:28.361879 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:22:28.425899 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:28.429045 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:28.743569 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:28.926312 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:28.929073 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:29.241845 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:29.425626 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:29.428282 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:29.741346 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:29.955358 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:29.955975 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:30.241668 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:30.401885 1012241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.180998384s)
	I0819 20:22:30.425974 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:30.429007 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:30.551280 1012241 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0819 20:22:30.551422 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:22:30.574299 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:22:30.699609 1012241 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0819 20:22:30.720399 1012241 addons.go:234] Setting addon gcp-auth=true in "addons-199708"
	I0819 20:22:30.720459 1012241 host.go:66] Checking if "addons-199708" exists ...
	I0819 20:22:30.720936 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:30.741238 1012241 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0819 20:22:30.741293 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:22:30.742711 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:30.775112 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:22:30.860987 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:22:30.868697 1012241 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 20:22:30.871540 1012241 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0819 20:22:30.874656 1012241 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0819 20:22:30.874717 1012241 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0819 20:22:30.899595 1012241 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0819 20:22:30.899619 1012241 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0819 20:22:30.920544 1012241 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 20:22:30.920565 1012241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0819 20:22:30.925915 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:30.931223 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:30.956892 1012241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 20:22:31.241567 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:31.429084 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:31.433942 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:31.595080 1012241 addons.go:475] Verifying addon gcp-auth=true in "addons-199708"
	I0819 20:22:31.598023 1012241 out.go:177] * Verifying gcp-auth addon...
	I0819 20:22:31.602450 1012241 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0819 20:22:31.608594 1012241 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0819 20:22:31.608665 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:31.747896 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:31.925881 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:31.929353 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:32.105958 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:32.241248 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:32.426327 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:32.527562 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:32.609463 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:32.746086 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:32.862059 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:22:32.925811 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:32.929107 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:33.106809 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:33.240927 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:33.430554 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:33.431352 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:33.605713 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:33.740931 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:33.925349 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:33.928133 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:34.107489 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:34.240842 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:34.425004 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:34.428331 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:34.605958 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:34.741048 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:34.925332 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:34.929007 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:35.107046 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:35.241508 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:35.361052 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:22:35.426762 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:35.429087 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:35.606730 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:35.740754 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:35.925151 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:35.928258 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:36.106713 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:36.240729 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:36.425225 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:36.427977 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:36.606383 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:36.740936 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:36.926023 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:36.928658 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:37.106360 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:37.241048 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:37.426796 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:37.428756 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:37.605982 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:37.740393 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:37.860408 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:22:37.925488 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:37.928123 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:38.105525 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:38.240863 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:38.425273 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:38.428465 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:38.605673 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:38.741103 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:38.925318 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:38.928083 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:39.106465 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:39.240968 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:39.425201 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:39.428514 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:39.606533 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:39.740759 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:39.925535 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:39.928362 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:40.106579 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:40.241147 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:40.361076 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:22:40.425812 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:40.429498 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:40.605584 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:40.741025 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:40.924846 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:40.927986 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:41.106485 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:41.240749 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:41.424983 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:41.428064 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:41.606960 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:41.740362 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:41.925863 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:41.928255 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:42.117971 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:42.241868 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:42.361344 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:22:42.425095 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:42.428403 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:42.605412 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:42.741213 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:42.925030 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:42.929457 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:43.105660 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:43.241206 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:43.425541 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:43.428723 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:43.606767 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:43.740536 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:43.925122 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:43.928173 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:44.106575 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:44.241076 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:44.362061 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:22:44.424935 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:44.427844 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:44.606185 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:44.740698 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:44.925500 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:44.928363 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:45.106862 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:45.243680 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:45.428216 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:45.430779 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:45.606065 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:45.741144 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:45.925224 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:45.928525 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:46.106410 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:46.240905 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:46.424743 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:46.428303 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:46.605859 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:46.740970 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:46.861207 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:22:46.925470 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:46.928439 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:47.105731 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:47.240801 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:47.425193 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:47.428555 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:47.605992 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:47.741013 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:47.924902 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:47.928381 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:48.105850 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:48.241063 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:48.425722 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:48.428578 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:48.606538 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:48.741033 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:48.863119 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:22:48.926352 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:48.929561 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:49.105958 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:49.242858 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:49.425872 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:49.429496 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:49.616606 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:49.741971 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:49.924914 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:49.928257 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:50.109741 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:50.241533 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:50.429146 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:50.432002 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:50.606477 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:50.741001 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:50.924955 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:50.928457 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:51.115178 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:51.241334 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:51.361180 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:22:51.425772 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:51.429487 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:51.606312 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:51.740968 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:51.925147 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:51.927836 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:52.107026 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:52.240421 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:52.425670 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:52.429110 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:52.606669 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:52.741162 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:52.925388 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:52.928030 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:53.106450 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:53.240947 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:53.426562 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:53.429312 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:53.605486 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:53.740958 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:53.861430 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:22:53.925961 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:53.928530 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:54.105717 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:54.240716 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:54.425053 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:54.429322 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:54.605825 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:54.740144 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:54.924916 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:54.928294 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:55.109808 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:55.240433 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:55.425284 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:55.429338 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:55.606494 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:55.740686 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:55.861509 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:22:55.924755 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:55.928210 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:56.108780 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:56.241068 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:56.426682 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:56.428680 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:56.606611 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:56.741222 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:56.926544 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:56.928813 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:57.106605 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:57.241259 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:57.425625 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:57.428215 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:57.605566 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:57.741037 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:57.925133 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:57.928687 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:58.106213 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:58.240309 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:58.362158 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:22:58.425377 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:58.429583 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:58.606430 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:58.741024 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:58.925325 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:58.928285 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:59.105540 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:59.241379 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:59.424758 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:59.428306 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:59.605726 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:59.741033 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:59.925485 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:59.928828 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:00.120386 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:00.241755 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:00.426176 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:00.430017 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:00.606901 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:00.740547 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:00.860973 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:23:00.925504 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:00.929700 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:01.106242 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:01.241006 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:01.425699 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:01.428254 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:01.605616 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:01.741112 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:01.925615 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:01.928600 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:02.105956 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:02.240736 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:02.425183 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:02.428152 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:02.606614 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:02.741396 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:02.924952 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:02.928923 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:03.106287 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:03.240846 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:03.361108 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:23:03.424707 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:03.428018 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:03.606193 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:03.740364 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:03.925224 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:03.927756 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:04.106002 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:04.240737 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:04.424844 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:04.428312 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:04.605457 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:04.740978 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:04.925109 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:04.928783 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:05.106299 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:05.240576 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:05.424934 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:05.428410 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:05.605680 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:05.740202 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:05.861110 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:23:05.925417 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:05.934039 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:06.139278 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:06.264253 1012241 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 20:23:06.264283 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:06.367990 1012241 node_ready.go:49] node "addons-199708" has status "Ready":"True"
	I0819 20:23:06.368049 1012241 node_ready.go:38] duration metric: took 42.51079439s for node "addons-199708" to be "Ready" ...
	I0819 20:23:06.368061 1012241 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 20:23:06.410854 1012241 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-6n4mb" in "kube-system" namespace to be "Ready" ...
	I0819 20:23:06.519384 1012241 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 20:23:06.519412 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:06.520313 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:06.687327 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:06.827221 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:06.951844 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:06.955705 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:07.106651 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:07.243343 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:07.418143 1012241 pod_ready.go:93] pod "coredns-6f6b679f8f-6n4mb" in "kube-system" namespace has status "Ready":"True"
	I0819 20:23:07.418167 1012241 pod_ready.go:82] duration metric: took 1.007278517s for pod "coredns-6f6b679f8f-6n4mb" in "kube-system" namespace to be "Ready" ...
	I0819 20:23:07.418189 1012241 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-199708" in "kube-system" namespace to be "Ready" ...
	I0819 20:23:07.424838 1012241 pod_ready.go:93] pod "etcd-addons-199708" in "kube-system" namespace has status "Ready":"True"
	I0819 20:23:07.424865 1012241 pod_ready.go:82] duration metric: took 6.667929ms for pod "etcd-addons-199708" in "kube-system" namespace to be "Ready" ...
	I0819 20:23:07.424881 1012241 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-199708" in "kube-system" namespace to be "Ready" ...
	I0819 20:23:07.426077 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:07.430677 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:07.432468 1012241 pod_ready.go:93] pod "kube-apiserver-addons-199708" in "kube-system" namespace has status "Ready":"True"
	I0819 20:23:07.432493 1012241 pod_ready.go:82] duration metric: took 7.603948ms for pod "kube-apiserver-addons-199708" in "kube-system" namespace to be "Ready" ...
	I0819 20:23:07.432505 1012241 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-199708" in "kube-system" namespace to be "Ready" ...
	I0819 20:23:07.438162 1012241 pod_ready.go:93] pod "kube-controller-manager-addons-199708" in "kube-system" namespace has status "Ready":"True"
	I0819 20:23:07.438189 1012241 pod_ready.go:82] duration metric: took 5.675804ms for pod "kube-controller-manager-addons-199708" in "kube-system" namespace to be "Ready" ...
	I0819 20:23:07.438207 1012241 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-99r72" in "kube-system" namespace to be "Ready" ...
	I0819 20:23:07.561426 1012241 pod_ready.go:93] pod "kube-proxy-99r72" in "kube-system" namespace has status "Ready":"True"
	I0819 20:23:07.561451 1012241 pod_ready.go:82] duration metric: took 123.235387ms for pod "kube-proxy-99r72" in "kube-system" namespace to be "Ready" ...
	I0819 20:23:07.561464 1012241 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-199708" in "kube-system" namespace to be "Ready" ...
	I0819 20:23:07.605634 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:07.742987 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:07.927210 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:07.930265 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:07.961378 1012241 pod_ready.go:93] pod "kube-scheduler-addons-199708" in "kube-system" namespace has status "Ready":"True"
	I0819 20:23:07.961405 1012241 pod_ready.go:82] duration metric: took 399.93288ms for pod "kube-scheduler-addons-199708" in "kube-system" namespace to be "Ready" ...
	I0819 20:23:07.961416 1012241 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace to be "Ready" ...
	I0819 20:23:08.113189 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:08.248110 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:08.426298 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:08.430282 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:08.608619 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:08.742933 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:08.926400 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:08.929383 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:09.108420 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:09.243707 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:09.425900 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:09.431419 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:09.608497 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:09.753010 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:09.931411 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:09.932877 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:09.970271 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:10.107963 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:10.244363 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:10.431720 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:10.438715 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:10.607238 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:10.743646 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:10.929928 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:10.935211 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:11.106864 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:11.242993 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:11.437016 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:11.438702 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:11.606872 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:11.755267 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:11.926045 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:11.939855 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:11.978557 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:12.108087 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:12.261953 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:12.431316 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:12.433201 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:12.612775 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:12.745961 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:12.927415 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:12.936529 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:13.106530 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:13.243112 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:13.426894 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:13.436697 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:13.607182 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:13.756573 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:13.926950 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:13.933048 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:14.106917 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:14.242275 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:14.425535 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:14.429131 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:14.471502 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:14.607707 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:14.742823 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:14.931004 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:14.931939 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:15.111773 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:15.243013 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:15.425208 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:15.431312 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:15.606223 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:15.741865 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:15.926604 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:15.929792 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:16.106880 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:16.242579 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:16.426826 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:16.431106 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:16.606783 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:16.741846 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:16.926260 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:16.929354 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:16.971613 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:17.106552 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:17.243925 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:17.426086 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:17.430223 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:17.607383 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:17.742891 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:17.926772 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:17.931520 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:18.106855 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:18.243739 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:18.427393 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:18.433766 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:18.607596 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:18.744973 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:18.926396 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:18.931800 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:18.974817 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:19.106894 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:19.246671 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:19.429404 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:19.433358 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:19.605816 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:19.742676 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:19.928912 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:19.934182 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:20.107286 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:20.242506 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:20.426711 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:20.430923 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:20.606968 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:20.741692 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:20.925871 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:20.929630 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:21.106831 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:21.243158 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:21.427230 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:21.429231 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:21.467543 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:21.607084 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:21.742084 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:21.926742 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:21.930765 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:22.106507 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:22.241646 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:22.425965 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:22.428780 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:22.606938 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:22.742926 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:22.925648 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:22.929281 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:23.108099 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:23.242886 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:23.427536 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:23.431035 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:23.473870 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:23.615888 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:23.742253 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:23.928230 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:23.928793 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:24.105944 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:24.243046 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:24.428320 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:24.433709 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:24.606603 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:24.744723 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:24.926723 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:24.932630 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:25.108431 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:25.242319 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:25.446337 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:25.467030 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:25.609112 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:25.746233 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:25.926763 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:25.930667 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:25.968104 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:26.106755 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:26.242148 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:26.529506 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:26.530804 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:26.650135 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:26.742518 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:26.927061 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:26.930018 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:27.106666 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:27.243421 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:27.426709 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:27.429793 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:27.606868 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:27.742281 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:27.925853 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:27.928635 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:28.106013 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:28.242198 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:28.425893 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:28.428823 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:28.468478 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:28.606443 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:28.742259 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:28.925862 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:28.929981 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:29.106986 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:29.242359 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:29.426212 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:29.431372 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:29.606662 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:29.742603 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:29.927969 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:29.929312 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:30.108552 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:30.243245 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:30.428895 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:30.430113 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:30.606665 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:30.741867 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:30.928390 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:30.934260 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:30.969308 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:31.107845 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:31.245018 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:31.426240 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:31.430773 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:31.611519 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:31.743930 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:31.927090 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:31.929769 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:32.106614 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:32.242324 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:32.428048 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:32.429781 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:32.606827 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:32.742145 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:32.925149 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:32.928790 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:33.106812 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:33.244463 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:33.426630 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:33.430239 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:33.473106 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:33.607251 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:33.743705 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:33.926477 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:33.931404 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:34.107521 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:34.242890 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:34.428656 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:34.430124 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:34.605996 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:34.742758 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:34.926392 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:34.930731 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:35.106810 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:35.242454 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:35.454111 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:35.454704 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:35.475759 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:35.605965 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:35.741956 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:35.928234 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:35.932677 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:36.106883 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:36.241869 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:36.425413 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:36.429293 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:36.606404 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:36.742731 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:36.927462 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:36.930604 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:37.106942 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:37.241886 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:37.434925 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:37.437377 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:37.606631 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:37.742268 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:37.925766 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:37.928469 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:37.967546 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:38.108973 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:38.242364 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:38.431241 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:38.434765 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:38.624385 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:38.742659 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:38.926662 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:38.929183 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:39.105995 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:39.249137 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:39.435151 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:39.435420 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:39.606999 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:39.742340 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:39.925953 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:39.928733 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:39.968689 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:40.107008 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:40.242670 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:40.434292 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:40.439798 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:40.606347 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:40.741257 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:40.926572 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:40.928419 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:41.106673 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:41.242295 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:41.432763 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:41.433810 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:41.610349 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:41.744939 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:41.942213 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:41.944701 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:41.984196 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:42.107251 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:42.242964 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:42.429014 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:42.434440 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:42.606888 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:42.742969 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:42.927770 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:42.931378 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:43.106788 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:43.243234 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:43.427061 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:43.432064 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:43.606665 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:43.743065 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:43.926273 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:43.931145 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:44.107279 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:44.242807 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:44.426780 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:44.431234 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:44.469170 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:44.607378 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:44.742856 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:44.930947 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:44.933648 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:45.114757 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:45.243519 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:45.434944 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:45.435653 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:45.607174 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:45.744762 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:45.929817 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:45.932132 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:46.106832 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:46.242963 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:46.426114 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:46.431496 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:46.471709 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:46.605985 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:46.741454 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:46.926573 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:46.944027 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:47.107199 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:47.242312 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:47.435744 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:47.436909 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:47.607040 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:47.742095 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:47.926241 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:47.931054 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:48.106793 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:48.241882 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:48.425791 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:48.428808 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:48.606284 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:48.742610 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:48.927118 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:48.933035 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:48.971710 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:49.113118 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:49.243486 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:49.426862 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:49.429999 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:49.606542 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:49.742524 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:49.926325 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:49.929545 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:50.107470 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:50.241947 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:50.426194 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:50.428740 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:50.606190 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:50.742381 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:50.925672 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:50.928489 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:51.106394 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:51.244407 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:51.426812 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:51.431189 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:51.476804 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:51.607274 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:51.742661 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:51.927652 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:51.933339 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:52.107408 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:52.242916 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:52.427498 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:52.435090 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:52.606990 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:52.745083 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:52.925795 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:52.930976 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:53.107187 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:53.242568 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:53.426201 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:53.430602 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:53.606818 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:53.742172 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:53.938906 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:53.946162 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:53.974016 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:54.106146 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:54.241814 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:54.425811 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:54.428228 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:54.606057 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:54.742322 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:54.926288 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:54.942120 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:55.106846 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:55.242518 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:55.425989 1012241 kapi.go:107] duration metric: took 1m28.504263748s to wait for kubernetes.io/minikube-addons=registry ...
	I0819 20:23:55.428435 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:55.606792 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:55.742705 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:55.929222 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:56.106208 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:56.242507 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:56.430007 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:56.468863 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:56.607121 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:56.742783 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:56.929835 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:57.106319 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:57.244123 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:57.432349 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:57.606745 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:57.741848 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:57.929234 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:58.114165 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:58.241990 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:58.429930 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:58.476249 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:58.607395 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:58.742858 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:58.929775 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:59.106582 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:59.242772 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:59.434457 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:59.606757 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:59.742434 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:59.931306 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:24:00.135247 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:00.257261 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:00.559170 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:24:00.560676 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:24:00.607377 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:00.744080 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:00.929409 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:24:01.107343 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:01.243377 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:01.433571 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:24:01.607188 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:01.743222 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:01.930317 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:24:02.107162 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:02.242751 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:02.429580 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:24:02.607185 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:02.744614 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:02.929525 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:24:02.968822 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:24:03.107144 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:03.242410 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:03.430998 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:24:03.618882 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:03.743115 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:03.931415 1012241 kapi.go:107] duration metric: took 1m37.006805239s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0819 20:24:04.107015 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:04.242192 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:04.606355 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:04.741695 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:04.968964 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:24:05.108942 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:05.246114 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:05.607881 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:05.742002 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:06.106715 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:06.241954 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:06.606735 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:06.741293 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:07.107383 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:07.242426 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:07.468562 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:24:07.606702 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:07.742218 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:08.106493 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:08.244691 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:08.605990 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:08.741726 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:09.107592 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:09.242606 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:09.471398 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:24:09.610249 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:09.748396 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:10.107379 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:10.246776 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:10.606733 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:10.742801 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:11.126894 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:11.241712 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:11.610338 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:11.742892 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:11.969125 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:24:12.109258 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:12.242082 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:12.613176 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:12.743057 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:13.107217 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:13.244499 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:13.606427 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:13.745367 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:14.109361 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:14.243696 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:14.471998 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:24:14.606097 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:14.742760 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:15.110878 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:15.242423 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:15.606886 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:15.743828 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:16.107118 1012241 kapi.go:107] duration metric: took 1m44.504665719s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0819 20:24:16.109941 1012241 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-199708 cluster.
	I0819 20:24:16.112759 1012241 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0819 20:24:16.115432 1012241 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0819 20:24:16.244763 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:16.475345 1012241 pod_ready.go:93] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"True"
	I0819 20:24:16.475370 1012241 pod_ready.go:82] duration metric: took 1m8.513946192s for pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace to be "Ready" ...
	I0819 20:24:16.475390 1012241 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-6p75r" in "kube-system" namespace to be "Ready" ...
	I0819 20:24:16.489003 1012241 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-6p75r" in "kube-system" namespace has status "Ready":"True"
	I0819 20:24:16.489107 1012241 pod_ready.go:82] duration metric: took 13.707029ms for pod "nvidia-device-plugin-daemonset-6p75r" in "kube-system" namespace to be "Ready" ...
	I0819 20:24:16.489153 1012241 pod_ready.go:39] duration metric: took 1m10.121073383s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 20:24:16.489202 1012241 api_server.go:52] waiting for apiserver process to appear ...
	I0819 20:24:16.489273 1012241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:24:16.489419 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:24:16.570925 1012241 cri.go:89] found id: "c5ee1a4b656858706d16e09f2577f1ceb0f47aabd974faa222453c787c1b7bd8"
	I0819 20:24:16.570998 1012241 cri.go:89] found id: ""
	I0819 20:24:16.571021 1012241 logs.go:276] 1 containers: [c5ee1a4b656858706d16e09f2577f1ceb0f47aabd974faa222453c787c1b7bd8]
	I0819 20:24:16.571140 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:16.578051 1012241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:24:16.578169 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:24:16.632509 1012241 cri.go:89] found id: "926dc4caa041d397c3880a4325d8f356a972cfccd2a77902392d470e8a12ffc1"
	I0819 20:24:16.632590 1012241 cri.go:89] found id: ""
	I0819 20:24:16.632613 1012241 logs.go:276] 1 containers: [926dc4caa041d397c3880a4325d8f356a972cfccd2a77902392d470e8a12ffc1]
	I0819 20:24:16.632742 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:16.638483 1012241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:24:16.638603 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:24:16.711434 1012241 cri.go:89] found id: "4496c326dd4d9a3ff2e3a885ab411816b9ff5078f1f9fa33fcf51557b7fe96ec"
	I0819 20:24:16.711540 1012241 cri.go:89] found id: ""
	I0819 20:24:16.711566 1012241 logs.go:276] 1 containers: [4496c326dd4d9a3ff2e3a885ab411816b9ff5078f1f9fa33fcf51557b7fe96ec]
	I0819 20:24:16.711779 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:16.721069 1012241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:24:16.721297 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:24:16.745367 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:16.801901 1012241 cri.go:89] found id: "7f089b595eb71f4f444cfba1715195b143c2da503401429047e8f0059ded8ce0"
	I0819 20:24:16.801926 1012241 cri.go:89] found id: ""
	I0819 20:24:16.801936 1012241 logs.go:276] 1 containers: [7f089b595eb71f4f444cfba1715195b143c2da503401429047e8f0059ded8ce0]
	I0819 20:24:16.801996 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:16.810474 1012241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:24:16.810555 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:24:16.903763 1012241 cri.go:89] found id: "0e164c1098e699c8334f713a53dccc6fb785c5a533691496feb7bfbb3bc3fc57"
	I0819 20:24:16.903788 1012241 cri.go:89] found id: ""
	I0819 20:24:16.903797 1012241 logs.go:276] 1 containers: [0e164c1098e699c8334f713a53dccc6fb785c5a533691496feb7bfbb3bc3fc57]
	I0819 20:24:16.903854 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:16.910320 1012241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:24:16.910457 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:24:17.024005 1012241 cri.go:89] found id: "17ec9f70f07aae6962f91d85b38bb77039cd2b084aa3c8faee6f57d6a8c3f529"
	I0819 20:24:17.024078 1012241 cri.go:89] found id: ""
	I0819 20:24:17.024099 1012241 logs.go:276] 1 containers: [17ec9f70f07aae6962f91d85b38bb77039cd2b084aa3c8faee6f57d6a8c3f529]
	I0819 20:24:17.024194 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:17.032001 1012241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:24:17.032137 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:24:17.106335 1012241 cri.go:89] found id: "6bdf6081a42b6a1f5ee894cd0d45bf4d184f481b1cd7cbd6cc01a0e3700332de"
	I0819 20:24:17.106362 1012241 cri.go:89] found id: ""
	I0819 20:24:17.106370 1012241 logs.go:276] 1 containers: [6bdf6081a42b6a1f5ee894cd0d45bf4d184f481b1cd7cbd6cc01a0e3700332de]
	I0819 20:24:17.106462 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:17.110824 1012241 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:24:17.110862 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 20:24:17.242732 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:17.580241 1012241 logs.go:123] Gathering logs for kube-apiserver [c5ee1a4b656858706d16e09f2577f1ceb0f47aabd974faa222453c787c1b7bd8] ...
	I0819 20:24:17.580319 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5ee1a4b656858706d16e09f2577f1ceb0f47aabd974faa222453c787c1b7bd8"
	I0819 20:24:17.677887 1012241 logs.go:123] Gathering logs for kube-controller-manager [17ec9f70f07aae6962f91d85b38bb77039cd2b084aa3c8faee6f57d6a8c3f529] ...
	I0819 20:24:17.677965 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17ec9f70f07aae6962f91d85b38bb77039cd2b084aa3c8faee6f57d6a8c3f529"
	I0819 20:24:17.747460 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:17.759447 1012241 logs.go:123] Gathering logs for kindnet [6bdf6081a42b6a1f5ee894cd0d45bf4d184f481b1cd7cbd6cc01a0e3700332de] ...
	I0819 20:24:17.759620 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bdf6081a42b6a1f5ee894cd0d45bf4d184f481b1cd7cbd6cc01a0e3700332de"
	I0819 20:24:17.827313 1012241 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:24:17.827398 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:24:17.950772 1012241 logs.go:123] Gathering logs for container status ...
	I0819 20:24:17.950811 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:24:18.001915 1012241 logs.go:123] Gathering logs for kubelet ...
	I0819 20:24:18.001953 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 20:24:18.089272 1012241 logs.go:138] Found kubelet problem: Aug 19 20:23:05 addons-199708 kubelet[1507]: W0819 20:23:05.995162    1507 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-199708" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-199708' and this object
	W0819 20:24:18.089525 1012241 logs.go:138] Found kubelet problem: Aug 19 20:23:05 addons-199708 kubelet[1507]: E0819 20:23:05.995217    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-199708\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-199708' and this object" logger="UnhandledError"
	W0819 20:24:18.089744 1012241 logs.go:138] Found kubelet problem: Aug 19 20:23:05 addons-199708 kubelet[1507]: W0819 20:23:05.995389    1507 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-199708" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-199708' and this object
	W0819 20:24:18.089975 1012241 logs.go:138] Found kubelet problem: Aug 19 20:23:05 addons-199708 kubelet[1507]: E0819 20:23:05.995412    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-199708\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-199708' and this object" logger="UnhandledError"
	I0819 20:24:18.132482 1012241 logs.go:123] Gathering logs for dmesg ...
	I0819 20:24:18.132566 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:24:18.155620 1012241 logs.go:123] Gathering logs for etcd [926dc4caa041d397c3880a4325d8f356a972cfccd2a77902392d470e8a12ffc1] ...
	I0819 20:24:18.155691 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 926dc4caa041d397c3880a4325d8f356a972cfccd2a77902392d470e8a12ffc1"
	I0819 20:24:18.243244 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:18.251320 1012241 logs.go:123] Gathering logs for coredns [4496c326dd4d9a3ff2e3a885ab411816b9ff5078f1f9fa33fcf51557b7fe96ec] ...
	I0819 20:24:18.251354 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4496c326dd4d9a3ff2e3a885ab411816b9ff5078f1f9fa33fcf51557b7fe96ec"
	I0819 20:24:18.302939 1012241 logs.go:123] Gathering logs for kube-scheduler [7f089b595eb71f4f444cfba1715195b143c2da503401429047e8f0059ded8ce0] ...
	I0819 20:24:18.302972 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f089b595eb71f4f444cfba1715195b143c2da503401429047e8f0059ded8ce0"
	I0819 20:24:18.349975 1012241 logs.go:123] Gathering logs for kube-proxy [0e164c1098e699c8334f713a53dccc6fb785c5a533691496feb7bfbb3bc3fc57] ...
	I0819 20:24:18.350006 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e164c1098e699c8334f713a53dccc6fb785c5a533691496feb7bfbb3bc3fc57"
	I0819 20:24:18.391478 1012241 out.go:358] Setting ErrFile to fd 2...
	I0819 20:24:18.391506 1012241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 20:24:18.391556 1012241 out.go:270] X Problems detected in kubelet:
	W0819 20:24:18.391579 1012241 out.go:270]   Aug 19 20:23:05 addons-199708 kubelet[1507]: W0819 20:23:05.995162    1507 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-199708" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-199708' and this object
	W0819 20:24:18.391587 1012241 out.go:270]   Aug 19 20:23:05 addons-199708 kubelet[1507]: E0819 20:23:05.995217    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-199708\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-199708' and this object" logger="UnhandledError"
	W0819 20:24:18.391600 1012241 out.go:270]   Aug 19 20:23:05 addons-199708 kubelet[1507]: W0819 20:23:05.995389    1507 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-199708" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-199708' and this object
	W0819 20:24:18.391606 1012241 out.go:270]   Aug 19 20:23:05 addons-199708 kubelet[1507]: E0819 20:23:05.995412    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-199708\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-199708' and this object" logger="UnhandledError"
	I0819 20:24:18.391619 1012241 out.go:358] Setting ErrFile to fd 2...
	I0819 20:24:18.391626 1012241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:24:18.742261 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:19.242706 1012241 kapi.go:107] duration metric: took 1m52.005821099s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0819 20:24:19.244386 1012241 out.go:177] * Enabled addons: cloud-spanner, default-storageclass, storage-provisioner, ingress-dns, nvidia-device-plugin, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0819 20:24:19.246034 1012241 addons.go:510] duration metric: took 1m59.634203667s for enable addons: enabled=[cloud-spanner default-storageclass storage-provisioner ingress-dns nvidia-device-plugin metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0819 20:24:28.393157 1012241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:24:28.407073 1012241 api_server.go:72] duration metric: took 2m8.795670662s to wait for apiserver process to appear ...
	I0819 20:24:28.407098 1012241 api_server.go:88] waiting for apiserver healthz status ...
	I0819 20:24:28.407135 1012241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:24:28.407197 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:24:28.454246 1012241 cri.go:89] found id: "c5ee1a4b656858706d16e09f2577f1ceb0f47aabd974faa222453c787c1b7bd8"
	I0819 20:24:28.454294 1012241 cri.go:89] found id: ""
	I0819 20:24:28.454302 1012241 logs.go:276] 1 containers: [c5ee1a4b656858706d16e09f2577f1ceb0f47aabd974faa222453c787c1b7bd8]
	I0819 20:24:28.454362 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:28.457763 1012241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:24:28.457831 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:24:28.498431 1012241 cri.go:89] found id: "926dc4caa041d397c3880a4325d8f356a972cfccd2a77902392d470e8a12ffc1"
	I0819 20:24:28.498453 1012241 cri.go:89] found id: ""
	I0819 20:24:28.498461 1012241 logs.go:276] 1 containers: [926dc4caa041d397c3880a4325d8f356a972cfccd2a77902392d470e8a12ffc1]
	I0819 20:24:28.498516 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:28.501884 1012241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:24:28.501955 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:24:28.547991 1012241 cri.go:89] found id: "4496c326dd4d9a3ff2e3a885ab411816b9ff5078f1f9fa33fcf51557b7fe96ec"
	I0819 20:24:28.548013 1012241 cri.go:89] found id: ""
	I0819 20:24:28.548021 1012241 logs.go:276] 1 containers: [4496c326dd4d9a3ff2e3a885ab411816b9ff5078f1f9fa33fcf51557b7fe96ec]
	I0819 20:24:28.548084 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:28.551656 1012241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:24:28.551738 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:24:28.591665 1012241 cri.go:89] found id: "7f089b595eb71f4f444cfba1715195b143c2da503401429047e8f0059ded8ce0"
	I0819 20:24:28.591689 1012241 cri.go:89] found id: ""
	I0819 20:24:28.591698 1012241 logs.go:276] 1 containers: [7f089b595eb71f4f444cfba1715195b143c2da503401429047e8f0059ded8ce0]
	I0819 20:24:28.591765 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:28.595477 1012241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:24:28.595555 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:24:28.634837 1012241 cri.go:89] found id: "0e164c1098e699c8334f713a53dccc6fb785c5a533691496feb7bfbb3bc3fc57"
	I0819 20:24:28.634862 1012241 cri.go:89] found id: ""
	I0819 20:24:28.634870 1012241 logs.go:276] 1 containers: [0e164c1098e699c8334f713a53dccc6fb785c5a533691496feb7bfbb3bc3fc57]
	I0819 20:24:28.634927 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:28.638513 1012241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:24:28.638584 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:24:28.682379 1012241 cri.go:89] found id: "17ec9f70f07aae6962f91d85b38bb77039cd2b084aa3c8faee6f57d6a8c3f529"
	I0819 20:24:28.682412 1012241 cri.go:89] found id: ""
	I0819 20:24:28.682447 1012241 logs.go:276] 1 containers: [17ec9f70f07aae6962f91d85b38bb77039cd2b084aa3c8faee6f57d6a8c3f529]
	I0819 20:24:28.682521 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:28.686046 1012241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:24:28.686140 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:24:28.727462 1012241 cri.go:89] found id: "6bdf6081a42b6a1f5ee894cd0d45bf4d184f481b1cd7cbd6cc01a0e3700332de"
	I0819 20:24:28.727532 1012241 cri.go:89] found id: ""
	I0819 20:24:28.727540 1012241 logs.go:276] 1 containers: [6bdf6081a42b6a1f5ee894cd0d45bf4d184f481b1cd7cbd6cc01a0e3700332de]
	I0819 20:24:28.727601 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:28.731328 1012241 logs.go:123] Gathering logs for kube-proxy [0e164c1098e699c8334f713a53dccc6fb785c5a533691496feb7bfbb3bc3fc57] ...
	I0819 20:24:28.731356 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e164c1098e699c8334f713a53dccc6fb785c5a533691496feb7bfbb3bc3fc57"
	I0819 20:24:28.774182 1012241 logs.go:123] Gathering logs for kindnet [6bdf6081a42b6a1f5ee894cd0d45bf4d184f481b1cd7cbd6cc01a0e3700332de] ...
	I0819 20:24:28.774213 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bdf6081a42b6a1f5ee894cd0d45bf4d184f481b1cd7cbd6cc01a0e3700332de"
	I0819 20:24:28.829817 1012241 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:24:28.829851 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:24:28.931277 1012241 logs.go:123] Gathering logs for kubelet ...
	I0819 20:24:28.931312 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 20:24:28.986656 1012241 logs.go:138] Found kubelet problem: Aug 19 20:23:05 addons-199708 kubelet[1507]: W0819 20:23:05.995162    1507 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-199708" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-199708' and this object
	W0819 20:24:28.986902 1012241 logs.go:138] Found kubelet problem: Aug 19 20:23:05 addons-199708 kubelet[1507]: E0819 20:23:05.995217    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-199708\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-199708' and this object" logger="UnhandledError"
	W0819 20:24:28.987092 1012241 logs.go:138] Found kubelet problem: Aug 19 20:23:05 addons-199708 kubelet[1507]: W0819 20:23:05.995389    1507 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-199708" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-199708' and this object
	W0819 20:24:28.987324 1012241 logs.go:138] Found kubelet problem: Aug 19 20:23:05 addons-199708 kubelet[1507]: E0819 20:23:05.995412    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-199708\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-199708' and this object" logger="UnhandledError"
	I0819 20:24:29.024723 1012241 logs.go:123] Gathering logs for dmesg ...
	I0819 20:24:29.024757 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:24:29.041183 1012241 logs.go:123] Gathering logs for kube-apiserver [c5ee1a4b656858706d16e09f2577f1ceb0f47aabd974faa222453c787c1b7bd8] ...
	I0819 20:24:29.041211 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5ee1a4b656858706d16e09f2577f1ceb0f47aabd974faa222453c787c1b7bd8"
	I0819 20:24:29.112777 1012241 logs.go:123] Gathering logs for etcd [926dc4caa041d397c3880a4325d8f356a972cfccd2a77902392d470e8a12ffc1] ...
	I0819 20:24:29.112811 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 926dc4caa041d397c3880a4325d8f356a972cfccd2a77902392d470e8a12ffc1"
	I0819 20:24:29.165323 1012241 logs.go:123] Gathering logs for kube-scheduler [7f089b595eb71f4f444cfba1715195b143c2da503401429047e8f0059ded8ce0] ...
	I0819 20:24:29.165357 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f089b595eb71f4f444cfba1715195b143c2da503401429047e8f0059ded8ce0"
	I0819 20:24:29.212570 1012241 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:24:29.212603 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 20:24:29.351681 1012241 logs.go:123] Gathering logs for coredns [4496c326dd4d9a3ff2e3a885ab411816b9ff5078f1f9fa33fcf51557b7fe96ec] ...
	I0819 20:24:29.351717 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4496c326dd4d9a3ff2e3a885ab411816b9ff5078f1f9fa33fcf51557b7fe96ec"
	I0819 20:24:29.391732 1012241 logs.go:123] Gathering logs for kube-controller-manager [17ec9f70f07aae6962f91d85b38bb77039cd2b084aa3c8faee6f57d6a8c3f529] ...
	I0819 20:24:29.391764 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17ec9f70f07aae6962f91d85b38bb77039cd2b084aa3c8faee6f57d6a8c3f529"
	I0819 20:24:29.477318 1012241 logs.go:123] Gathering logs for container status ...
	I0819 20:24:29.477354 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:24:29.541960 1012241 out.go:358] Setting ErrFile to fd 2...
	I0819 20:24:29.541988 1012241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 20:24:29.542044 1012241 out.go:270] X Problems detected in kubelet:
	W0819 20:24:29.542060 1012241 out.go:270]   Aug 19 20:23:05 addons-199708 kubelet[1507]: W0819 20:23:05.995162    1507 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-199708" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-199708' and this object
	W0819 20:24:29.542075 1012241 out.go:270]   Aug 19 20:23:05 addons-199708 kubelet[1507]: E0819 20:23:05.995217    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-199708\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-199708' and this object" logger="UnhandledError"
	W0819 20:24:29.542082 1012241 out.go:270]   Aug 19 20:23:05 addons-199708 kubelet[1507]: W0819 20:23:05.995389    1507 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-199708" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-199708' and this object
	W0819 20:24:29.542091 1012241 out.go:270]   Aug 19 20:23:05 addons-199708 kubelet[1507]: E0819 20:23:05.995412    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-199708\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-199708' and this object" logger="UnhandledError"
	I0819 20:24:29.542099 1012241 out.go:358] Setting ErrFile to fd 2...
	I0819 20:24:29.542106 1012241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:24:39.543323 1012241 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:24:39.552891 1012241 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0819 20:24:39.554226 1012241 api_server.go:141] control plane version: v1.31.0
	I0819 20:24:39.554250 1012241 api_server.go:131] duration metric: took 11.147145485s to wait for apiserver health ...
	I0819 20:24:39.554259 1012241 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 20:24:39.554283 1012241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:24:39.554356 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:24:39.609850 1012241 cri.go:89] found id: "c5ee1a4b656858706d16e09f2577f1ceb0f47aabd974faa222453c787c1b7bd8"
	I0819 20:24:39.609881 1012241 cri.go:89] found id: ""
	I0819 20:24:39.609890 1012241 logs.go:276] 1 containers: [c5ee1a4b656858706d16e09f2577f1ceb0f47aabd974faa222453c787c1b7bd8]
	I0819 20:24:39.609952 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:39.613500 1012241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:24:39.613575 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:24:39.654987 1012241 cri.go:89] found id: "926dc4caa041d397c3880a4325d8f356a972cfccd2a77902392d470e8a12ffc1"
	I0819 20:24:39.655009 1012241 cri.go:89] found id: ""
	I0819 20:24:39.655017 1012241 logs.go:276] 1 containers: [926dc4caa041d397c3880a4325d8f356a972cfccd2a77902392d470e8a12ffc1]
	I0819 20:24:39.655078 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:39.659467 1012241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:24:39.659537 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:24:39.733901 1012241 cri.go:89] found id: "4496c326dd4d9a3ff2e3a885ab411816b9ff5078f1f9fa33fcf51557b7fe96ec"
	I0819 20:24:39.733923 1012241 cri.go:89] found id: ""
	I0819 20:24:39.733931 1012241 logs.go:276] 1 containers: [4496c326dd4d9a3ff2e3a885ab411816b9ff5078f1f9fa33fcf51557b7fe96ec]
	I0819 20:24:39.733987 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:39.737487 1012241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:24:39.737563 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:24:39.783938 1012241 cri.go:89] found id: "7f089b595eb71f4f444cfba1715195b143c2da503401429047e8f0059ded8ce0"
	I0819 20:24:39.783963 1012241 cri.go:89] found id: ""
	I0819 20:24:39.783970 1012241 logs.go:276] 1 containers: [7f089b595eb71f4f444cfba1715195b143c2da503401429047e8f0059ded8ce0]
	I0819 20:24:39.784033 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:39.787772 1012241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:24:39.787844 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:24:39.836687 1012241 cri.go:89] found id: "0e164c1098e699c8334f713a53dccc6fb785c5a533691496feb7bfbb3bc3fc57"
	I0819 20:24:39.836712 1012241 cri.go:89] found id: ""
	I0819 20:24:39.836720 1012241 logs.go:276] 1 containers: [0e164c1098e699c8334f713a53dccc6fb785c5a533691496feb7bfbb3bc3fc57]
	I0819 20:24:39.836778 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:39.840569 1012241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:24:39.840656 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:24:39.892838 1012241 cri.go:89] found id: "17ec9f70f07aae6962f91d85b38bb77039cd2b084aa3c8faee6f57d6a8c3f529"
	I0819 20:24:39.892862 1012241 cri.go:89] found id: ""
	I0819 20:24:39.892870 1012241 logs.go:276] 1 containers: [17ec9f70f07aae6962f91d85b38bb77039cd2b084aa3c8faee6f57d6a8c3f529]
	I0819 20:24:39.892929 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:39.900058 1012241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:24:39.900187 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:24:40.030238 1012241 cri.go:89] found id: "6bdf6081a42b6a1f5ee894cd0d45bf4d184f481b1cd7cbd6cc01a0e3700332de"
	I0819 20:24:40.030266 1012241 cri.go:89] found id: ""
	I0819 20:24:40.030279 1012241 logs.go:276] 1 containers: [6bdf6081a42b6a1f5ee894cd0d45bf4d184f481b1cd7cbd6cc01a0e3700332de]
	I0819 20:24:40.030406 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:40.040785 1012241 logs.go:123] Gathering logs for kubelet ...
	I0819 20:24:40.040817 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 20:24:40.096987 1012241 logs.go:138] Found kubelet problem: Aug 19 20:23:05 addons-199708 kubelet[1507]: W0819 20:23:05.995162    1507 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-199708" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-199708' and this object
	W0819 20:24:40.097241 1012241 logs.go:138] Found kubelet problem: Aug 19 20:23:05 addons-199708 kubelet[1507]: E0819 20:23:05.995217    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-199708\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-199708' and this object" logger="UnhandledError"
	W0819 20:24:40.097430 1012241 logs.go:138] Found kubelet problem: Aug 19 20:23:05 addons-199708 kubelet[1507]: W0819 20:23:05.995389    1507 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-199708" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-199708' and this object
	W0819 20:24:40.097761 1012241 logs.go:138] Found kubelet problem: Aug 19 20:23:05 addons-199708 kubelet[1507]: E0819 20:23:05.995412    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-199708\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-199708' and this object" logger="UnhandledError"
	I0819 20:24:40.140134 1012241 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:24:40.140172 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 20:24:40.379574 1012241 logs.go:123] Gathering logs for kube-scheduler [7f089b595eb71f4f444cfba1715195b143c2da503401429047e8f0059ded8ce0] ...
	I0819 20:24:40.379606 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f089b595eb71f4f444cfba1715195b143c2da503401429047e8f0059ded8ce0"
	I0819 20:24:40.459970 1012241 logs.go:123] Gathering logs for kindnet [6bdf6081a42b6a1f5ee894cd0d45bf4d184f481b1cd7cbd6cc01a0e3700332de] ...
	I0819 20:24:40.460001 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bdf6081a42b6a1f5ee894cd0d45bf4d184f481b1cd7cbd6cc01a0e3700332de"
	I0819 20:24:40.511002 1012241 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:24:40.511038 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:24:40.609475 1012241 logs.go:123] Gathering logs for dmesg ...
	I0819 20:24:40.609512 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:24:40.627600 1012241 logs.go:123] Gathering logs for kube-apiserver [c5ee1a4b656858706d16e09f2577f1ceb0f47aabd974faa222453c787c1b7bd8] ...
	I0819 20:24:40.627633 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5ee1a4b656858706d16e09f2577f1ceb0f47aabd974faa222453c787c1b7bd8"
	I0819 20:24:40.708954 1012241 logs.go:123] Gathering logs for etcd [926dc4caa041d397c3880a4325d8f356a972cfccd2a77902392d470e8a12ffc1] ...
	I0819 20:24:40.708986 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 926dc4caa041d397c3880a4325d8f356a972cfccd2a77902392d470e8a12ffc1"
	I0819 20:24:40.773054 1012241 logs.go:123] Gathering logs for coredns [4496c326dd4d9a3ff2e3a885ab411816b9ff5078f1f9fa33fcf51557b7fe96ec] ...
	I0819 20:24:40.773097 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4496c326dd4d9a3ff2e3a885ab411816b9ff5078f1f9fa33fcf51557b7fe96ec"
	I0819 20:24:40.818995 1012241 logs.go:123] Gathering logs for kube-proxy [0e164c1098e699c8334f713a53dccc6fb785c5a533691496feb7bfbb3bc3fc57] ...
	I0819 20:24:40.819028 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e164c1098e699c8334f713a53dccc6fb785c5a533691496feb7bfbb3bc3fc57"
	I0819 20:24:40.862651 1012241 logs.go:123] Gathering logs for kube-controller-manager [17ec9f70f07aae6962f91d85b38bb77039cd2b084aa3c8faee6f57d6a8c3f529] ...
	I0819 20:24:40.862683 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17ec9f70f07aae6962f91d85b38bb77039cd2b084aa3c8faee6f57d6a8c3f529"
	I0819 20:24:40.959927 1012241 logs.go:123] Gathering logs for container status ...
	I0819 20:24:40.959973 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:24:41.009965 1012241 out.go:358] Setting ErrFile to fd 2...
	I0819 20:24:41.009995 1012241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 20:24:41.010086 1012241 out.go:270] X Problems detected in kubelet:
	W0819 20:24:41.010114 1012241 out.go:270]   Aug 19 20:23:05 addons-199708 kubelet[1507]: W0819 20:23:05.995162    1507 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-199708" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-199708' and this object
	W0819 20:24:41.010130 1012241 out.go:270]   Aug 19 20:23:05 addons-199708 kubelet[1507]: E0819 20:23:05.995217    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-199708\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-199708' and this object" logger="UnhandledError"
	W0819 20:24:41.010148 1012241 out.go:270]   Aug 19 20:23:05 addons-199708 kubelet[1507]: W0819 20:23:05.995389    1507 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-199708" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-199708' and this object
	W0819 20:24:41.010162 1012241 out.go:270]   Aug 19 20:23:05 addons-199708 kubelet[1507]: E0819 20:23:05.995412    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-199708\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-199708' and this object" logger="UnhandledError"
	I0819 20:24:41.010169 1012241 out.go:358] Setting ErrFile to fd 2...
	I0819 20:24:41.010180 1012241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:24:51.024938 1012241 system_pods.go:59] 18 kube-system pods found
	I0819 20:24:51.024987 1012241 system_pods.go:61] "coredns-6f6b679f8f-6n4mb" [c3402fe2-9566-4f90-a512-9f614a55dece] Running
	I0819 20:24:51.024994 1012241 system_pods.go:61] "csi-hostpath-attacher-0" [9c27b821-4447-46b9-b1ad-2aa93595632b] Running
	I0819 20:24:51.024999 1012241 system_pods.go:61] "csi-hostpath-resizer-0" [935e08cf-2eb9-45aa-88d7-22e89cc8528c] Running
	I0819 20:24:51.025003 1012241 system_pods.go:61] "csi-hostpathplugin-mp2fj" [c6450a00-7d90-4f5f-ac88-97e1805effe3] Running
	I0819 20:24:51.025007 1012241 system_pods.go:61] "etcd-addons-199708" [f3b7d38f-e384-4ac0-a896-f06a60a5b650] Running
	I0819 20:24:51.025012 1012241 system_pods.go:61] "kindnet-frmsm" [293a5e8d-a8b5-470d-a110-bde48e311ad7] Running
	I0819 20:24:51.025016 1012241 system_pods.go:61] "kube-apiserver-addons-199708" [7eef55b1-1f3d-4d7d-a66f-a2b96d167158] Running
	I0819 20:24:51.025020 1012241 system_pods.go:61] "kube-controller-manager-addons-199708" [6299ec89-5e0a-4fbc-a136-274a9f0ad339] Running
	I0819 20:24:51.025026 1012241 system_pods.go:61] "kube-ingress-dns-minikube" [18bbf659-adcd-4f3c-8a24-47c9af3dcf74] Running
	I0819 20:24:51.025032 1012241 system_pods.go:61] "kube-proxy-99r72" [36b5b22d-de71-471c-9b87-896b105a27cc] Running
	I0819 20:24:51.025036 1012241 system_pods.go:61] "kube-scheduler-addons-199708" [6cc1f06d-47de-41c0-9c60-df3cb6229707] Running
	I0819 20:24:51.025040 1012241 system_pods.go:61] "metrics-server-8988944d9-phnbr" [9ff0d452-fc9c-4259-bc8e-032f3ad5350a] Running
	I0819 20:24:51.025045 1012241 system_pods.go:61] "nvidia-device-plugin-daemonset-6p75r" [03198291-96ab-4c9c-8393-70aa68bb887b] Running
	I0819 20:24:51.025049 1012241 system_pods.go:61] "registry-6fb4cdfc84-2d8zw" [571a9575-3986-40cc-80d1-071415cf3a04] Running
	I0819 20:24:51.025053 1012241 system_pods.go:61] "registry-proxy-mtrlv" [fe09b5f8-66ed-4907-8d46-d177a6e3922f] Running
	I0819 20:24:51.025057 1012241 system_pods.go:61] "snapshot-controller-56fcc65765-65dzc" [1f0b80b1-656d-4d0a-8e51-84aeeee65b66] Running
	I0819 20:24:51.025062 1012241 system_pods.go:61] "snapshot-controller-56fcc65765-t9q62" [f4909d7e-03a0-4e63-b3e8-7addc77d9b4b] Running
	I0819 20:24:51.025067 1012241 system_pods.go:61] "storage-provisioner" [3e5f85cd-821b-4050-823b-b31a35b1d14a] Running
	I0819 20:24:51.025074 1012241 system_pods.go:74] duration metric: took 11.470808338s to wait for pod list to return data ...
	I0819 20:24:51.025119 1012241 default_sa.go:34] waiting for default service account to be created ...
	I0819 20:24:51.028830 1012241 default_sa.go:45] found service account: "default"
	I0819 20:24:51.028863 1012241 default_sa.go:55] duration metric: took 3.727656ms for default service account to be created ...
	I0819 20:24:51.028874 1012241 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 20:24:51.040327 1012241 system_pods.go:86] 18 kube-system pods found
	I0819 20:24:51.040374 1012241 system_pods.go:89] "coredns-6f6b679f8f-6n4mb" [c3402fe2-9566-4f90-a512-9f614a55dece] Running
	I0819 20:24:51.040385 1012241 system_pods.go:89] "csi-hostpath-attacher-0" [9c27b821-4447-46b9-b1ad-2aa93595632b] Running
	I0819 20:24:51.040391 1012241 system_pods.go:89] "csi-hostpath-resizer-0" [935e08cf-2eb9-45aa-88d7-22e89cc8528c] Running
	I0819 20:24:51.040396 1012241 system_pods.go:89] "csi-hostpathplugin-mp2fj" [c6450a00-7d90-4f5f-ac88-97e1805effe3] Running
	I0819 20:24:51.040402 1012241 system_pods.go:89] "etcd-addons-199708" [f3b7d38f-e384-4ac0-a896-f06a60a5b650] Running
	I0819 20:24:51.040407 1012241 system_pods.go:89] "kindnet-frmsm" [293a5e8d-a8b5-470d-a110-bde48e311ad7] Running
	I0819 20:24:51.040412 1012241 system_pods.go:89] "kube-apiserver-addons-199708" [7eef55b1-1f3d-4d7d-a66f-a2b96d167158] Running
	I0819 20:24:51.040418 1012241 system_pods.go:89] "kube-controller-manager-addons-199708" [6299ec89-5e0a-4fbc-a136-274a9f0ad339] Running
	I0819 20:24:51.040424 1012241 system_pods.go:89] "kube-ingress-dns-minikube" [18bbf659-adcd-4f3c-8a24-47c9af3dcf74] Running
	I0819 20:24:51.040431 1012241 system_pods.go:89] "kube-proxy-99r72" [36b5b22d-de71-471c-9b87-896b105a27cc] Running
	I0819 20:24:51.040436 1012241 system_pods.go:89] "kube-scheduler-addons-199708" [6cc1f06d-47de-41c0-9c60-df3cb6229707] Running
	I0819 20:24:51.040441 1012241 system_pods.go:89] "metrics-server-8988944d9-phnbr" [9ff0d452-fc9c-4259-bc8e-032f3ad5350a] Running
	I0819 20:24:51.040450 1012241 system_pods.go:89] "nvidia-device-plugin-daemonset-6p75r" [03198291-96ab-4c9c-8393-70aa68bb887b] Running
	I0819 20:24:51.040455 1012241 system_pods.go:89] "registry-6fb4cdfc84-2d8zw" [571a9575-3986-40cc-80d1-071415cf3a04] Running
	I0819 20:24:51.040459 1012241 system_pods.go:89] "registry-proxy-mtrlv" [fe09b5f8-66ed-4907-8d46-d177a6e3922f] Running
	I0819 20:24:51.040464 1012241 system_pods.go:89] "snapshot-controller-56fcc65765-65dzc" [1f0b80b1-656d-4d0a-8e51-84aeeee65b66] Running
	I0819 20:24:51.040473 1012241 system_pods.go:89] "snapshot-controller-56fcc65765-t9q62" [f4909d7e-03a0-4e63-b3e8-7addc77d9b4b] Running
	I0819 20:24:51.040477 1012241 system_pods.go:89] "storage-provisioner" [3e5f85cd-821b-4050-823b-b31a35b1d14a] Running
	I0819 20:24:51.040485 1012241 system_pods.go:126] duration metric: took 11.605134ms to wait for k8s-apps to be running ...
	I0819 20:24:51.040498 1012241 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 20:24:51.040564 1012241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 20:24:51.058658 1012241 system_svc.go:56] duration metric: took 18.150789ms WaitForService to wait for kubelet
	I0819 20:24:51.058692 1012241 kubeadm.go:582] duration metric: took 2m31.447294246s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 20:24:51.058737 1012241 node_conditions.go:102] verifying NodePressure condition ...
	I0819 20:24:51.063276 1012241 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0819 20:24:51.063314 1012241 node_conditions.go:123] node cpu capacity is 2
	I0819 20:24:51.063328 1012241 node_conditions.go:105] duration metric: took 4.579367ms to run NodePressure ...
	I0819 20:24:51.063342 1012241 start.go:241] waiting for startup goroutines ...
	I0819 20:24:51.063372 1012241 start.go:246] waiting for cluster config update ...
	I0819 20:24:51.063395 1012241 start.go:255] writing updated cluster config ...
	I0819 20:24:51.063746 1012241 ssh_runner.go:195] Run: rm -f paused
	I0819 20:24:51.420395 1012241 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 20:24:51.422289 1012241 out.go:177] * Done! kubectl is now configured to use "addons-199708" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.391969071Z" level=info msg="Stopped pod sandbox (already stopped): ff9d1d2c0ff856b9738c78016efccd1dfba9df2621ac420b576a4f31faeed68a" id=7b1077df-66f1-4575-a2e7-61a7b4299491 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.392287568Z" level=info msg="Removing pod sandbox: ff9d1d2c0ff856b9738c78016efccd1dfba9df2621ac420b576a4f31faeed68a" id=654f5d4c-01ba-4e8e-9b9d-f471e41d5511 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.402446126Z" level=info msg="Removed pod sandbox: ff9d1d2c0ff856b9738c78016efccd1dfba9df2621ac420b576a4f31faeed68a" id=654f5d4c-01ba-4e8e-9b9d-f471e41d5511 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.402982024Z" level=info msg="Stopping pod sandbox: 5336f56b63833e1c70c5f10df13c7211662a7872504215432e41f5e08b2beb21" id=00112c4d-faba-48a5-ad7b-14ea99c0395d name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.403104682Z" level=info msg="Stopped pod sandbox (already stopped): 5336f56b63833e1c70c5f10df13c7211662a7872504215432e41f5e08b2beb21" id=00112c4d-faba-48a5-ad7b-14ea99c0395d name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.403415687Z" level=info msg="Removing pod sandbox: 5336f56b63833e1c70c5f10df13c7211662a7872504215432e41f5e08b2beb21" id=c8fbd52a-4396-4a9f-bb1e-37d7b8049bce name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.412295969Z" level=info msg="Removed pod sandbox: 5336f56b63833e1c70c5f10df13c7211662a7872504215432e41f5e08b2beb21" id=c8fbd52a-4396-4a9f-bb1e-37d7b8049bce name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.412778460Z" level=info msg="Stopping pod sandbox: 7825604dd8d55b555307db98057ed34fc79224a4b2020cf8d3bb5bdcb482dd02" id=34c892fe-14e7-4b28-a62e-80ce59be0204 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.412814308Z" level=info msg="Stopped pod sandbox (already stopped): 7825604dd8d55b555307db98057ed34fc79224a4b2020cf8d3bb5bdcb482dd02" id=34c892fe-14e7-4b28-a62e-80ce59be0204 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.413092591Z" level=info msg="Removing pod sandbox: 7825604dd8d55b555307db98057ed34fc79224a4b2020cf8d3bb5bdcb482dd02" id=9a23b742-624c-4674-9a02-d6b3446b57e8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.423181916Z" level=info msg="Removed pod sandbox: 7825604dd8d55b555307db98057ed34fc79224a4b2020cf8d3bb5bdcb482dd02" id=9a23b742-624c-4674-9a02-d6b3446b57e8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.589652070Z" level=warning msg="Stopping container 7bcc772078f217c5bb453128e5317de7d0cd183fff3a41cdfe5d4909643a01d8 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=3f6fd735-47e0-457b-bf27-4fd2221d2657 name=/runtime.v1.RuntimeService/StopContainer
	Aug 19 20:29:15 addons-199708 conmon[4630]: conmon 7bcc772078f217c5bb45 <ninfo>: container 4641 exited with status 137
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.743757438Z" level=info msg="Stopped container 7bcc772078f217c5bb453128e5317de7d0cd183fff3a41cdfe5d4909643a01d8: ingress-nginx/ingress-nginx-controller-bc57996ff-dl6tk/controller" id=3f6fd735-47e0-457b-bf27-4fd2221d2657 name=/runtime.v1.RuntimeService/StopContainer
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.744794813Z" level=info msg="Stopping pod sandbox: e43adfdd85c9953495a958c775bd99c52de797977d3c2988600259c843369e75" id=9f395f7e-8055-444c-9f15-4f50827af1fb name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.748149021Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-2QXCLE4DPIYKBF56 - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-N32A6SPGIXOX2FUR - [0:0]\n-X KUBE-HP-2QXCLE4DPIYKBF56\n-X KUBE-HP-N32A6SPGIXOX2FUR\nCOMMIT\n"
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.749494226Z" level=info msg="Closing host port tcp:80"
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.749546911Z" level=info msg="Closing host port tcp:443"
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.750954212Z" level=info msg="Host port tcp:80 does not have an open socket"
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.750983397Z" level=info msg="Host port tcp:443 does not have an open socket"
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.751146498Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-bc57996ff-dl6tk Namespace:ingress-nginx ID:e43adfdd85c9953495a958c775bd99c52de797977d3c2988600259c843369e75 UID:f4d56a1d-6a1c-4eef-8328-e7af16f27b45 NetNS:/var/run/netns/c71fabdf-1470-434f-b48b-6544e59fa7d0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.751281643Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-bc57996ff-dl6tk from CNI network \"kindnet\" (type=ptp)"
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.775342013Z" level=info msg="Stopped pod sandbox: e43adfdd85c9953495a958c775bd99c52de797977d3c2988600259c843369e75" id=9f395f7e-8055-444c-9f15-4f50827af1fb name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.860579701Z" level=info msg="Removing container: 7bcc772078f217c5bb453128e5317de7d0cd183fff3a41cdfe5d4909643a01d8" id=6e02f5c7-7b6f-4bad-ae31-503e346b68b6 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.877042282Z" level=info msg="Removed container 7bcc772078f217c5bb453128e5317de7d0cd183fff3a41cdfe5d4909643a01d8: ingress-nginx/ingress-nginx-controller-bc57996ff-dl6tk/controller" id=6e02f5c7-7b6f-4bad-ae31-503e346b68b6 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9029ac3c1990f       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   9 seconds ago       Running             hello-world-app           0                   88c13fa84b6fd       hello-world-app-55bf9c44b4-r2xfn
	459b1733976de       docker.io/library/nginx@sha256:ba188f579f7a2638229e326e78c957a185630e303757813ef1ad7aac1b8248b6                         2 minutes ago       Running             nginx                     0                   d9862f0d07964       nginx
	db3f6e8bef454       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     4 minutes ago       Running             busybox                   0                   145db0b5e9dd9       busybox
	638d90220b8aa       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70   6 minutes ago       Running             metrics-server            0                   6cccec059b98a       metrics-server-8988944d9-phnbr
	4496c326dd4d9       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                        6 minutes ago       Running             coredns                   0                   ca321f2053e83       coredns-6f6b679f8f-6n4mb
	572c16f172949       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        6 minutes ago       Running             storage-provisioner       0                   aa638c4fe2aa1       storage-provisioner
	6bdf6081a42b6       docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64                      6 minutes ago       Running             kindnet-cni               0                   ace2136025d5c       kindnet-frmsm
	0e164c1098e69       71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89                                                        7 minutes ago       Running             kube-proxy                0                   9676de5532aa5       kube-proxy-99r72
	c5ee1a4b65685       cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388                                                        7 minutes ago       Running             kube-apiserver            0                   f4881709540e9       kube-apiserver-addons-199708
	17ec9f70f07aa       fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd                                                        7 minutes ago       Running             kube-controller-manager   0                   292f72858b202       kube-controller-manager-addons-199708
	7f089b595eb71       fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb                                                        7 minutes ago       Running             kube-scheduler            0                   1f54969b6168f       kube-scheduler-addons-199708
	926dc4caa041d       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                        7 minutes ago       Running             etcd                      0                   73d962ba063cb       etcd-addons-199708
	
	
	==> coredns [4496c326dd4d9a3ff2e3a885ab411816b9ff5078f1f9fa33fcf51557b7fe96ec] <==
	[INFO] 10.244.0.13:33588 - 5635 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002565193s
	[INFO] 10.244.0.13:59518 - 62271 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000146879s
	[INFO] 10.244.0.13:59518 - 45882 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000086334s
	[INFO] 10.244.0.13:56355 - 46575 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000131527s
	[INFO] 10.244.0.13:56355 - 10491 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000069571s
	[INFO] 10.244.0.13:51837 - 20507 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000073722s
	[INFO] 10.244.0.13:51837 - 16670 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000076127s
	[INFO] 10.244.0.13:43970 - 33709 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000052365s
	[INFO] 10.244.0.13:43970 - 16040 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000077817s
	[INFO] 10.244.0.13:51757 - 50128 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001669848s
	[INFO] 10.244.0.13:51757 - 42198 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001909272s
	[INFO] 10.244.0.13:58192 - 37642 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000071081s
	[INFO] 10.244.0.13:58192 - 4360 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000079138s
	[INFO] 10.244.0.20:36176 - 15129 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000252765s
	[INFO] 10.244.0.20:53341 - 46364 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000269627s
	[INFO] 10.244.0.20:42761 - 60138 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000268421s
	[INFO] 10.244.0.20:33836 - 2583 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000201705s
	[INFO] 10.244.0.20:59384 - 20410 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000166456s
	[INFO] 10.244.0.20:34924 - 47363 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000247867s
	[INFO] 10.244.0.20:49788 - 22871 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002725972s
	[INFO] 10.244.0.20:38962 - 38903 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003146753s
	[INFO] 10.244.0.20:38215 - 64776 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001079066s
	[INFO] 10.244.0.20:60615 - 49563 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00137435s
	[INFO] 10.244.0.23:45190 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000221341s
	[INFO] 10.244.0.23:60611 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000161976s
	
	
	==> describe nodes <==
	Name:               addons-199708
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-199708
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7253360125032c7e2214e25ff4b5c894ae5844e8
	                    minikube.k8s.io/name=addons-199708
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T20_22_15_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-199708
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 20:22:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-199708
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 20:29:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 20:27:22 +0000   Mon, 19 Aug 2024 20:22:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 20:27:22 +0000   Mon, 19 Aug 2024 20:22:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 20:27:22 +0000   Mon, 19 Aug 2024 20:22:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 20:27:22 +0000   Mon, 19 Aug 2024 20:23:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-199708
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	System Info:
	  Machine ID:                 62b70ead3e954443b0f62bd9077737ad
	  System UUID:                ede58689-a7ef-4dc9-a622-03d05ef9b23c
	  Boot ID:                    6e682a37-9512-4f3a-882d-7e45a79a9483
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  default                     hello-world-app-55bf9c44b4-r2xfn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 coredns-6f6b679f8f-6n4mb                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     7m1s
	  kube-system                 etcd-addons-199708                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7m7s
	  kube-system                 kindnet-frmsm                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m2s
	  kube-system                 kube-apiserver-addons-199708             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m7s
	  kube-system                 kube-controller-manager-addons-199708    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m7s
	  kube-system                 kube-proxy-99r72                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m2s
	  kube-system                 kube-scheduler-addons-199708             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m7s
	  kube-system                 metrics-server-8988944d9-phnbr           100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m56s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 6m55s  kube-proxy       
	  Normal   Starting                 7m7s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m7s   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  7m6s   kubelet          Node addons-199708 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m6s   kubelet          Node addons-199708 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m6s   kubelet          Node addons-199708 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m3s   node-controller  Node addons-199708 event: Registered Node addons-199708 in Controller
	  Normal   NodeReady                6m16s  kubelet          Node addons-199708 status is now: NodeReady
	
	
	==> dmesg <==
	
	
	==> etcd [926dc4caa041d397c3880a4325d8f356a972cfccd2a77902392d470e8a12ffc1] <==
	{"level":"info","ts":"2024-08-19T20:22:09.246144Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T20:22:09.257804Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T20:22:09.257847Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T20:22:23.595972Z","caller":"traceutil/trace.go:171","msg":"trace[1942552739] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"124.890398ms","start":"2024-08-19T20:22:23.471061Z","end":"2024-08-19T20:22:23.595951Z","steps":["trace[1942552739] 'process raft request'  (duration: 124.795867ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T20:22:24.284709Z","caller":"traceutil/trace.go:171","msg":"trace[597586118] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"182.545127ms","start":"2024-08-19T20:22:24.102019Z","end":"2024-08-19T20:22:24.284564Z","steps":["trace[597586118] 'process raft request'  (duration: 83.917065ms)","trace[597586118] 'compare'  (duration: 83.289589ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T20:22:24.299677Z","caller":"traceutil/trace.go:171","msg":"trace[1735036322] transaction","detail":"{read_only:false; response_revision:370; number_of_response:1; }","duration":"174.940037ms","start":"2024-08-19T20:22:24.124717Z","end":"2024-08-19T20:22:24.299657Z","steps":["trace[1735036322] 'process raft request'  (duration: 144.677744ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T20:22:24.301009Z","caller":"traceutil/trace.go:171","msg":"trace[383884208] transaction","detail":"{read_only:false; response_revision:371; number_of_response:1; }","duration":"176.07055ms","start":"2024-08-19T20:22:24.124910Z","end":"2024-08-19T20:22:24.300981Z","steps":["trace[383884208] 'process raft request'  (duration: 144.532638ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T20:22:24.301218Z","caller":"traceutil/trace.go:171","msg":"trace[824901047] linearizableReadLoop","detail":"{readStateIndex:380; appliedIndex:375; }","duration":"175.888372ms","start":"2024-08-19T20:22:24.125305Z","end":"2024-08-19T20:22:24.301193Z","steps":["trace[824901047] 'read index received'  (duration: 9.409254ms)","trace[824901047] 'applied index is now lower than readState.Index'  (duration: 166.477059ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T20:22:24.301730Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.360886ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T20:22:24.309405Z","caller":"traceutil/trace.go:171","msg":"trace[1291458707] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:374; }","duration":"184.039025ms","start":"2024-08-19T20:22:24.125344Z","end":"2024-08-19T20:22:24.309383Z","steps":["trace[1291458707] 'agreement among raft nodes before linearized reading'  (duration: 176.320477ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T20:22:24.310378Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.247796ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/coredns-6f6b679f8f\" ","response":"range_response_count:1 size:3797"}
	{"level":"info","ts":"2024-08-19T20:22:24.310483Z","caller":"traceutil/trace.go:171","msg":"trace[1568353333] range","detail":"{range_begin:/registry/replicasets/kube-system/coredns-6f6b679f8f; range_end:; response_count:1; response_revision:374; }","duration":"122.362306ms","start":"2024-08-19T20:22:24.188097Z","end":"2024-08-19T20:22:24.310459Z","steps":["trace[1568353333] 'agreement among raft nodes before linearized reading'  (duration: 122.220982ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T20:22:24.310663Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.13259ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T20:22:24.310720Z","caller":"traceutil/trace.go:171","msg":"trace[910454742] range","detail":"{range_begin:/registry/clusterroles/minikube-ingress-dns; range_end:; response_count:0; response_revision:374; }","duration":"185.190649ms","start":"2024-08-19T20:22:24.125520Z","end":"2024-08-19T20:22:24.310710Z","steps":["trace[910454742] 'agreement among raft nodes before linearized reading'  (duration: 185.114489ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T20:22:24.314737Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.322307ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-08-19T20:22:24.314841Z","caller":"traceutil/trace.go:171","msg":"trace[244805817] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:374; }","duration":"189.436432ms","start":"2024-08-19T20:22:24.125394Z","end":"2024-08-19T20:22:24.314830Z","steps":["trace[244805817] 'agreement among raft nodes before linearized reading'  (duration: 189.284302ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T20:22:24.315039Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.659085ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-08-19T20:22:24.315096Z","caller":"traceutil/trace.go:171","msg":"trace[1764991350] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:374; }","duration":"189.71702ms","start":"2024-08-19T20:22:24.125372Z","end":"2024-08-19T20:22:24.315089Z","steps":["trace[1764991350] 'agreement among raft nodes before linearized reading'  (duration: 189.64013ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T20:22:24.331941Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.054584ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:420"}
	{"level":"info","ts":"2024-08-19T20:22:24.332567Z","caller":"traceutil/trace.go:171","msg":"trace[1607455807] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:374; }","duration":"207.249162ms","start":"2024-08-19T20:22:24.125300Z","end":"2024-08-19T20:22:24.332549Z","steps":["trace[1607455807] 'agreement among raft nodes before linearized reading'  (duration: 205.062919ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T20:22:24.833660Z","caller":"traceutil/trace.go:171","msg":"trace[97244391] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"118.594486ms","start":"2024-08-19T20:22:24.715038Z","end":"2024-08-19T20:22:24.833632Z","steps":["trace[97244391] 'process raft request'  (duration: 23.061091ms)","trace[97244391] 'compare'  (duration: 94.885152ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T20:22:24.833896Z","caller":"traceutil/trace.go:171","msg":"trace[2075142307] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"118.810337ms","start":"2024-08-19T20:22:24.715077Z","end":"2024-08-19T20:22:24.833887Z","steps":["trace[2075142307] 'process raft request'  (duration: 118.015962ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T20:23:26.514570Z","caller":"traceutil/trace.go:171","msg":"trace[1521974339] transaction","detail":"{read_only:false; response_revision:964; number_of_response:1; }","duration":"104.149645ms","start":"2024-08-19T20:23:26.410403Z","end":"2024-08-19T20:23:26.514553Z","steps":["trace[1521974339] 'process raft request'  (duration: 32.518792ms)","trace[1521974339] 'compare'  (duration: 71.547891ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T20:23:26.521828Z","caller":"traceutil/trace.go:171","msg":"trace[1750390916] transaction","detail":"{read_only:false; response_revision:965; number_of_response:1; }","duration":"111.124673ms","start":"2024-08-19T20:23:26.410655Z","end":"2024-08-19T20:23:26.521780Z","steps":["trace[1750390916] 'process raft request'  (duration: 110.665705ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T20:23:26.522113Z","caller":"traceutil/trace.go:171","msg":"trace[1651835865] transaction","detail":"{read_only:false; response_revision:966; number_of_response:1; }","duration":"111.245426ms","start":"2024-08-19T20:23:26.410859Z","end":"2024-08-19T20:23:26.522105Z","steps":["trace[1651835865] 'process raft request'  (duration: 110.562994ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:29:21 up  4:11,  0 users,  load average: 0.38, 1.12, 2.08
	Linux addons-199708 5.15.0-1067-aws #73~20.04.1-Ubuntu SMP Wed Jul 24 17:31:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [6bdf6081a42b6a1f5ee894cd0d45bf4d184f481b1cd7cbd6cc01a0e3700332de] <==
	I0819 20:28:05.754850       1 main.go:299] handling current node
	W0819 20:28:15.243678       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 20:28:15.243716       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0819 20:28:15.754930       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 20:28:15.754968       1 main.go:299] handling current node
	I0819 20:28:25.754981       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 20:28:25.755017       1 main.go:299] handling current node
	W0819 20:28:30.966437       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 20:28:30.966474       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0819 20:28:35.755496       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 20:28:35.755621       1 main.go:299] handling current node
	I0819 20:28:45.755516       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 20:28:45.755553       1 main.go:299] handling current node
	W0819 20:28:46.108166       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 20:28:46.108204       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 20:28:55.755517       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 20:28:55.755561       1 main.go:299] handling current node
	I0819 20:29:05.754992       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 20:29:05.755030       1 main.go:299] handling current node
	W0819 20:29:06.350409       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 20:29:06.350444       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0819 20:29:13.457315       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 20:29:13.457351       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0819 20:29:15.755489       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 20:29:15.755529       1 main.go:299] handling current node
	
	
	==> kube-apiserver [c5ee1a4b656858706d16e09f2577f1ceb0f47aabd974faa222453c787c1b7bd8] <==
	E0819 20:25:00.718417       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56708: use of closed network connection
	E0819 20:25:00.869386       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56724: use of closed network connection
	E0819 20:25:24.215473       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I0819 20:25:25.909932       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0819 20:25:28.218427       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I0819 20:25:47.243570       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 20:25:47.243708       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 20:25:47.268157       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 20:25:47.268289       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 20:25:47.300539       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 20:25:47.300602       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 20:25:47.319762       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 20:25:47.319807       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 20:25:47.356279       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 20:25:47.356923       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0819 20:25:48.319896       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0819 20:25:48.356627       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0819 20:25:48.367306       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	I0819 20:25:55.092159       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.86.69"}
	E0819 20:26:08.422200       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0819 20:26:42.627890       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0819 20:26:43.686416       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0819 20:26:48.204612       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0819 20:26:48.520054       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.183.38"}
	I0819 20:29:10.219063       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.57.173"}
	
	
	==> kube-controller-manager [17ec9f70f07aae6962f91d85b38bb77039cd2b084aa3c8faee6f57d6a8c3f529] <==
	W0819 20:28:16.698528       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 20:28:16.698583       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 20:28:18.761104       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 20:28:18.761150       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 20:28:35.037238       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 20:28:35.037287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 20:28:44.751522       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 20:28:44.751566       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 20:29:00.935823       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 20:29:00.935869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 20:29:08.230190       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 20:29:08.230230       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0819 20:29:09.969371       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="29.409108ms"
	I0819 20:29:09.979808       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="10.376149ms"
	I0819 20:29:10.022482       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="42.539453ms"
	I0819 20:29:10.023708       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="98.945µs"
	I0819 20:29:11.881511       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="19.071063ms"
	I0819 20:29:11.881653       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="96.696µs"
	I0819 20:29:12.550764       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0819 20:29:12.561098       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="9.706µs"
	I0819 20:29:12.566973       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	W0819 20:29:18.612845       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 20:29:18.612889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 20:29:20.043993       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 20:29:20.044060       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [0e164c1098e699c8334f713a53dccc6fb785c5a533691496feb7bfbb3bc3fc57] <==
	I0819 20:22:23.616929       1 server_linux.go:66] "Using iptables proxy"
	I0819 20:22:25.762157       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0819 20:22:25.762238       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 20:22:26.012653       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0819 20:22:26.012815       1 server_linux.go:169] "Using iptables Proxier"
	I0819 20:22:26.015461       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 20:22:26.016145       1 server.go:483] "Version info" version="v1.31.0"
	I0819 20:22:26.016222       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 20:22:26.027984       1 config.go:197] "Starting service config controller"
	I0819 20:22:26.028082       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 20:22:26.028135       1 config.go:104] "Starting endpoint slice config controller"
	I0819 20:22:26.028165       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 20:22:26.028700       1 config.go:326] "Starting node config controller"
	I0819 20:22:26.028759       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 20:22:26.134197       1 shared_informer.go:320] Caches are synced for node config
	I0819 20:22:26.134362       1 shared_informer.go:320] Caches are synced for service config
	I0819 20:22:26.134433       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7f089b595eb71f4f444cfba1715195b143c2da503401429047e8f0059ded8ce0] <==
	W0819 20:22:13.217242       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 20:22:13.217260       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:13.217324       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 20:22:13.217340       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:13.217407       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 20:22:13.217425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:13.217485       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 20:22:13.217501       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:13.217540       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 20:22:13.217616       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:13.217679       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 20:22:13.217696       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:13.217735       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 20:22:13.217750       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:13.217793       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 20:22:13.217821       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:13.217864       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 20:22:13.217879       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:13.217921       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 20:22:13.217937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:13.217980       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 20:22:13.217996       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:13.218046       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 20:22:13.218064       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0819 20:22:14.805078       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 20:29:11 addons-199708 kubelet[1507]: I0819 20:29:11.352019    1507 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18bbf659-adcd-4f3c-8a24-47c9af3dcf74-kube-api-access-ps9s4" (OuterVolumeSpecName: "kube-api-access-ps9s4") pod "18bbf659-adcd-4f3c-8a24-47c9af3dcf74" (UID: "18bbf659-adcd-4f3c-8a24-47c9af3dcf74"). InnerVolumeSpecName "kube-api-access-ps9s4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 19 20:29:11 addons-199708 kubelet[1507]: I0819 20:29:11.445803    1507 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ps9s4\" (UniqueName: \"kubernetes.io/projected/18bbf659-adcd-4f3c-8a24-47c9af3dcf74-kube-api-access-ps9s4\") on node \"addons-199708\" DevicePath \"\""
	Aug 19 20:29:11 addons-199708 kubelet[1507]: I0819 20:29:11.848683    1507 scope.go:117] "RemoveContainer" containerID="c31cb1ef6d0bc963ef351ff119af0b5f6f00c36b13eff04e47c307282a278885"
	Aug 19 20:29:11 addons-199708 kubelet[1507]: I0819 20:29:11.879202    1507 scope.go:117] "RemoveContainer" containerID="c31cb1ef6d0bc963ef351ff119af0b5f6f00c36b13eff04e47c307282a278885"
	Aug 19 20:29:11 addons-199708 kubelet[1507]: E0819 20:29:11.879819    1507 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c31cb1ef6d0bc963ef351ff119af0b5f6f00c36b13eff04e47c307282a278885\": container with ID starting with c31cb1ef6d0bc963ef351ff119af0b5f6f00c36b13eff04e47c307282a278885 not found: ID does not exist" containerID="c31cb1ef6d0bc963ef351ff119af0b5f6f00c36b13eff04e47c307282a278885"
	Aug 19 20:29:11 addons-199708 kubelet[1507]: I0819 20:29:11.879862    1507 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c31cb1ef6d0bc963ef351ff119af0b5f6f00c36b13eff04e47c307282a278885"} err="failed to get container status \"c31cb1ef6d0bc963ef351ff119af0b5f6f00c36b13eff04e47c307282a278885\": rpc error: code = NotFound desc = could not find container \"c31cb1ef6d0bc963ef351ff119af0b5f6f00c36b13eff04e47c307282a278885\": container with ID starting with c31cb1ef6d0bc963ef351ff119af0b5f6f00c36b13eff04e47c307282a278885 not found: ID does not exist"
	Aug 19 20:29:11 addons-199708 kubelet[1507]: I0819 20:29:11.882710    1507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-r2xfn" podStartSLOduration=1.935749183 podStartE2EDuration="2.882691644s" podCreationTimestamp="2024-08-19 20:29:09 +0000 UTC" firstStartedPulling="2024-08-19 20:29:10.355419988 +0000 UTC m=+415.613309042" lastFinishedPulling="2024-08-19 20:29:11.302362441 +0000 UTC m=+416.560251503" observedRunningTime="2024-08-19 20:29:11.861169538 +0000 UTC m=+417.119058600" watchObservedRunningTime="2024-08-19 20:29:11.882691644 +0000 UTC m=+417.140580698"
	Aug 19 20:29:12 addons-199708 kubelet[1507]: I0819 20:29:12.876438    1507 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18bbf659-adcd-4f3c-8a24-47c9af3dcf74" path="/var/lib/kubelet/pods/18bbf659-adcd-4f3c-8a24-47c9af3dcf74/volumes"
	Aug 19 20:29:12 addons-199708 kubelet[1507]: I0819 20:29:12.877960    1507 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b7c0a79-70d5-44c3-8642-0e1c9d3ec6df" path="/var/lib/kubelet/pods/6b7c0a79-70d5-44c3-8642-0e1c9d3ec6df/volumes"
	Aug 19 20:29:12 addons-199708 kubelet[1507]: I0819 20:29:12.878341    1507 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c583e9bc-0b24-4c8f-83b8-c11c754e7f2e" path="/var/lib/kubelet/pods/c583e9bc-0b24-4c8f-83b8-c11c754e7f2e/volumes"
	Aug 19 20:29:15 addons-199708 kubelet[1507]: E0819 20:29:15.148747    1507 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724099355148428137,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 20:29:15 addons-199708 kubelet[1507]: E0819 20:29:15.148791    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724099355148428137,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 20:29:15 addons-199708 kubelet[1507]: I0819 20:29:15.352290    1507 scope.go:117] "RemoveContainer" containerID="d9bfd7736b9dabd464fa84de8cca653adb7cfd4dfd18c3574350752d79d3f844"
	Aug 19 20:29:15 addons-199708 kubelet[1507]: I0819 20:29:15.372896    1507 scope.go:117] "RemoveContainer" containerID="478c1daa83ca245177d426c647beca8526cf3394e2b2cf63cd542fa59b5b7ece"
	Aug 19 20:29:15 addons-199708 kubelet[1507]: I0819 20:29:15.859103    1507 scope.go:117] "RemoveContainer" containerID="7bcc772078f217c5bb453128e5317de7d0cd183fff3a41cdfe5d4909643a01d8"
	Aug 19 20:29:15 addons-199708 kubelet[1507]: I0819 20:29:15.871610    1507 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f4d56a1d-6a1c-4eef-8328-e7af16f27b45-webhook-cert\") pod \"f4d56a1d-6a1c-4eef-8328-e7af16f27b45\" (UID: \"f4d56a1d-6a1c-4eef-8328-e7af16f27b45\") "
	Aug 19 20:29:15 addons-199708 kubelet[1507]: I0819 20:29:15.871668    1507 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvxsf\" (UniqueName: \"kubernetes.io/projected/f4d56a1d-6a1c-4eef-8328-e7af16f27b45-kube-api-access-tvxsf\") pod \"f4d56a1d-6a1c-4eef-8328-e7af16f27b45\" (UID: \"f4d56a1d-6a1c-4eef-8328-e7af16f27b45\") "
	Aug 19 20:29:15 addons-199708 kubelet[1507]: I0819 20:29:15.875612    1507 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4d56a1d-6a1c-4eef-8328-e7af16f27b45-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "f4d56a1d-6a1c-4eef-8328-e7af16f27b45" (UID: "f4d56a1d-6a1c-4eef-8328-e7af16f27b45"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 19 20:29:15 addons-199708 kubelet[1507]: I0819 20:29:15.875737    1507 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4d56a1d-6a1c-4eef-8328-e7af16f27b45-kube-api-access-tvxsf" (OuterVolumeSpecName: "kube-api-access-tvxsf") pod "f4d56a1d-6a1c-4eef-8328-e7af16f27b45" (UID: "f4d56a1d-6a1c-4eef-8328-e7af16f27b45"). InnerVolumeSpecName "kube-api-access-tvxsf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 19 20:29:15 addons-199708 kubelet[1507]: I0819 20:29:15.877293    1507 scope.go:117] "RemoveContainer" containerID="7bcc772078f217c5bb453128e5317de7d0cd183fff3a41cdfe5d4909643a01d8"
	Aug 19 20:29:15 addons-199708 kubelet[1507]: E0819 20:29:15.877828    1507 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bcc772078f217c5bb453128e5317de7d0cd183fff3a41cdfe5d4909643a01d8\": container with ID starting with 7bcc772078f217c5bb453128e5317de7d0cd183fff3a41cdfe5d4909643a01d8 not found: ID does not exist" containerID="7bcc772078f217c5bb453128e5317de7d0cd183fff3a41cdfe5d4909643a01d8"
	Aug 19 20:29:15 addons-199708 kubelet[1507]: I0819 20:29:15.877876    1507 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bcc772078f217c5bb453128e5317de7d0cd183fff3a41cdfe5d4909643a01d8"} err="failed to get container status \"7bcc772078f217c5bb453128e5317de7d0cd183fff3a41cdfe5d4909643a01d8\": rpc error: code = NotFound desc = could not find container \"7bcc772078f217c5bb453128e5317de7d0cd183fff3a41cdfe5d4909643a01d8\": container with ID starting with 7bcc772078f217c5bb453128e5317de7d0cd183fff3a41cdfe5d4909643a01d8 not found: ID does not exist"
	Aug 19 20:29:15 addons-199708 kubelet[1507]: I0819 20:29:15.972072    1507 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f4d56a1d-6a1c-4eef-8328-e7af16f27b45-webhook-cert\") on node \"addons-199708\" DevicePath \"\""
	Aug 19 20:29:15 addons-199708 kubelet[1507]: I0819 20:29:15.972112    1507 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tvxsf\" (UniqueName: \"kubernetes.io/projected/f4d56a1d-6a1c-4eef-8328-e7af16f27b45-kube-api-access-tvxsf\") on node \"addons-199708\" DevicePath \"\""
	Aug 19 20:29:16 addons-199708 kubelet[1507]: I0819 20:29:16.876745    1507 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4d56a1d-6a1c-4eef-8328-e7af16f27b45" path="/var/lib/kubelet/pods/f4d56a1d-6a1c-4eef-8328-e7af16f27b45/volumes"
	
	
	==> storage-provisioner [572c16f1729497f7d94a754227c1c93424bdd957aa01d16528e4a906865cb8df] <==
	I0819 20:23:07.068670       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 20:23:07.084179       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 20:23:07.084307       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 20:23:07.093879       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 20:23:07.094187       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-199708_2f18a78e-7a53-487e-ad83-a82b90cd4069!
	I0819 20:23:07.094289       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f018fd16-44f4-43d0-9569-d72317f64d49", APIVersion:"v1", ResourceVersion:"903", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-199708_2f18a78e-7a53-487e-ad83-a82b90cd4069 became leader
	I0819 20:23:07.197770       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-199708_2f18a78e-7a53-487e-ad83-a82b90cd4069!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-199708 -n addons-199708
helpers_test.go:261: (dbg) Run:  kubectl --context addons-199708 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (154.24s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (321.2s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.230091ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-phnbr" [9ff0d452-fc9c-4259-bc8e-032f3ad5350a] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004482893s
addons_test.go:417: (dbg) Run:  kubectl --context addons-199708 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-199708 top pods -n kube-system: exit status 1 (118.138469ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-6n4mb, age: 3m58.107211767s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-199708 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-199708 top pods -n kube-system: exit status 1 (104.479607ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-6n4mb, age: 4m0.183958909s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-199708 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-199708 top pods -n kube-system: exit status 1 (90.044685ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-6n4mb, age: 4m2.973874296s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-199708 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-199708 top pods -n kube-system: exit status 1 (105.326763ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-6n4mb, age: 4m9.610703089s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-199708 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-199708 top pods -n kube-system: exit status 1 (95.34717ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-6n4mb, age: 4m19.852101024s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-199708 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-199708 top pods -n kube-system: exit status 1 (87.038809ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-6n4mb, age: 4m35.390781048s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-199708 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-199708 top pods -n kube-system: exit status 1 (91.68539ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-6n4mb, age: 4m48.600012426s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-199708 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-199708 top pods -n kube-system: exit status 1 (92.941288ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-6n4mb, age: 5m9.876136403s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-199708 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-199708 top pods -n kube-system: exit status 1 (87.129492ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-6n4mb, age: 5m37.227114325s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-199708 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-199708 top pods -n kube-system: exit status 1 (86.408331ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-6n4mb, age: 6m47.302991894s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-199708 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-199708 top pods -n kube-system: exit status 1 (87.167428ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-6n4mb, age: 7m19.927880443s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-199708 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-199708 top pods -n kube-system: exit status 1 (100.961648ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-6n4mb, age: 8m8.320684339s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-199708 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-199708 top pods -n kube-system: exit status 1 (88.853585ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-6n4mb, age: 9m9.938780397s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-199708 addons disable metrics-server --alsologtostderr -v=1
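As background on the failure mode above: kubectl top serves its numbers from the Metrics API that metrics-server registers as the v1beta1.metrics.k8s.io APIService, so "Metrics not available" for pods of this age means that API never became ready within the test's polling window. A minimal manual check against the same cluster (a sketch, not part of the captured run) would be:

	# Is the metrics APIService registered and Available?
	kubectl --context addons-199708 get apiservice v1beta1.metrics.k8s.io
	# If not, the metrics-server logs usually show why the kubelet scrape fails.
	kubectl --context addons-199708 -n kube-system logs deploy/metrics-server

If the APIService reports Available=False, the metrics-server log output is normally the quickest pointer to the underlying cause.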
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-199708
helpers_test.go:235: (dbg) docker inspect addons-199708:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "be074196787c441acb1e4e3688132394ef9ece5e3c58835e5695543db41d4bce",
	        "Created": "2024-08-19T20:21:51.051185979Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1012739,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-19T20:21:51.219238715Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:decdd59746a9dba10062a73f6cd4b910c7b4e60613660b1022f8357747681c4d",
	        "ResolvConfPath": "/var/lib/docker/containers/be074196787c441acb1e4e3688132394ef9ece5e3c58835e5695543db41d4bce/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/be074196787c441acb1e4e3688132394ef9ece5e3c58835e5695543db41d4bce/hostname",
	        "HostsPath": "/var/lib/docker/containers/be074196787c441acb1e4e3688132394ef9ece5e3c58835e5695543db41d4bce/hosts",
	        "LogPath": "/var/lib/docker/containers/be074196787c441acb1e4e3688132394ef9ece5e3c58835e5695543db41d4bce/be074196787c441acb1e4e3688132394ef9ece5e3c58835e5695543db41d4bce-json.log",
	        "Name": "/addons-199708",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-199708:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-199708",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/50ebe248ba8ff8c5027153884c818110827697138b3f856747769ed250ca28e0-init/diff:/var/lib/docker/overlay2/9477ca3f94c975b8a19e34c7e6e216a8aaa21d9134153e903eb7147c449f54f5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/50ebe248ba8ff8c5027153884c818110827697138b3f856747769ed250ca28e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/50ebe248ba8ff8c5027153884c818110827697138b3f856747769ed250ca28e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/50ebe248ba8ff8c5027153884c818110827697138b3f856747769ed250ca28e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-199708",
	                "Source": "/var/lib/docker/volumes/addons-199708/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-199708",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-199708",
	                "name.minikube.sigs.k8s.io": "addons-199708",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "95f05de878bc3f5c51f783c6d692670d63dbaa5d2bcaca44505ae6ea419adcd3",
	            "SandboxKey": "/var/run/docker/netns/95f05de878bc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33898"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33899"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33900"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33901"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-199708": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a244ccc75e1fe53096da053ca8b3a7ce793a2735388b362ea1751023a3492c18",
	                    "EndpointID": "391283ea82d8c3c176cb8eae0c738159e9779721da593ca6147cd8d7e6205e01",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-199708",
	                        "be074196787c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
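The post-mortem uses docker inspect to recover details of the node container, such as the published host port for its 22/tcp endpoint (33898 in the output above). A minimal Go sketch of extracting that field from the inspect JSON follows; the struct models only the fragment of the output needed here, and the helper name sshHostPort is an assumption for illustration.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectEntry models only the fragment of `docker inspect` output needed to
// recover the published host port for 22/tcp; all other fields are ignored.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

// sshHostPort is a hypothetical helper: it shells out to `docker inspect`
// and returns the host port bound to the container's 22/tcp endpoint.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "inspect", container).Output()
	if err != nil {
		return "", err
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		return "", err
	}
	if len(entries) == 0 || len(entries[0].NetworkSettings.Ports["22/tcp"]) == 0 {
		return "", fmt.Errorf("no 22/tcp binding found for %s", container)
	}
	return entries[0].NetworkSettings.Ports["22/tcp"][0].HostPort, nil
}

func main() {
	port, err := sshHostPort("addons-199708")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh host port:", port) // 33898 in the inspect output above
}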
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-199708 -n addons-199708
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-199708 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-199708 logs -n 25: (1.485854246s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-295909 | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC |                     |
	|         | download-docker-295909                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-295909                                                                   | download-docker-295909 | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC | 19 Aug 24 20:21 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-526736   | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC |                     |
	|         | binary-mirror-526736                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:38541                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-526736                                                                     | binary-mirror-526736   | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC | 19 Aug 24 20:21 UTC |
	| addons  | enable dashboard -p                                                                         | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC |                     |
	|         | addons-199708                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC |                     |
	|         | addons-199708                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-199708 --wait=true                                                                | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC | 19 Aug 24 20:24 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-199708 addons disable                                                                | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:25 UTC | 19 Aug 24 20:25 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-199708 ip                                                                            | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:25 UTC | 19 Aug 24 20:25 UTC |
	| addons  | addons-199708 addons disable                                                                | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:25 UTC | 19 Aug 24 20:25 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-199708 addons disable                                                                | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:25 UTC | 19 Aug 24 20:25 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-199708 addons                                                                        | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:25 UTC | 19 Aug 24 20:25 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:25 UTC | 19 Aug 24 20:25 UTC |
	|         | -p addons-199708                                                                            |                        |         |         |                     |                     |
	| addons  | addons-199708 addons                                                                        | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:25 UTC | 19 Aug 24 20:25 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-199708 ssh cat                                                                       | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:25 UTC | 19 Aug 24 20:25 UTC |
	|         | /opt/local-path-provisioner/pvc-da75018b-e55e-4bcd-afd0-fef3a5381dbe_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-199708 addons disable                                                                | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:25 UTC | 19 Aug 24 20:26 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:25 UTC | 19 Aug 24 20:25 UTC |
	|         | addons-199708                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:25 UTC | 19 Aug 24 20:25 UTC |
	|         | -p addons-199708                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-199708 addons disable                                                                | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:26 UTC | 19 Aug 24 20:26 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:26 UTC | 19 Aug 24 20:26 UTC |
	|         | addons-199708                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-199708 ssh curl -s                                                                   | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:26 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-199708 ip                                                                            | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:29 UTC | 19 Aug 24 20:29 UTC |
	| addons  | addons-199708 addons disable                                                                | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:29 UTC | 19 Aug 24 20:29 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-199708 addons disable                                                                | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:29 UTC | 19 Aug 24 20:29 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-199708 addons                                                                        | addons-199708          | jenkins | v1.33.1 | 19 Aug 24 20:31 UTC | 19 Aug 24 20:31 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 20:21:26
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 20:21:26.429759 1012241 out.go:345] Setting OutFile to fd 1 ...
	I0819 20:21:26.429924 1012241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:21:26.429935 1012241 out.go:358] Setting ErrFile to fd 2...
	I0819 20:21:26.429940 1012241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:21:26.430197 1012241 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1006087/.minikube/bin
	I0819 20:21:26.430635 1012241 out.go:352] Setting JSON to false
	I0819 20:21:26.431503 1012241 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":14628,"bootTime":1724084259,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0819 20:21:26.431578 1012241 start.go:139] virtualization:  
	I0819 20:21:26.434266 1012241 out.go:177] * [addons-199708] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 20:21:26.436128 1012241 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 20:21:26.436319 1012241 notify.go:220] Checking for updates...
	I0819 20:21:26.441184 1012241 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 20:21:26.443237 1012241 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-1006087/kubeconfig
	I0819 20:21:26.445245 1012241 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1006087/.minikube
	I0819 20:21:26.447156 1012241 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 20:21:26.449128 1012241 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 20:21:26.451618 1012241 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 20:21:26.482653 1012241 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 20:21:26.482772 1012241 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 20:21:26.537352 1012241 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-19 20:21:26.527799225 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 20:21:26.537484 1012241 docker.go:307] overlay module found
	I0819 20:21:26.539067 1012241 out.go:177] * Using the docker driver based on user configuration
	I0819 20:21:26.540373 1012241 start.go:297] selected driver: docker
	I0819 20:21:26.540387 1012241 start.go:901] validating driver "docker" against <nil>
	I0819 20:21:26.540417 1012241 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 20:21:26.541069 1012241 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 20:21:26.595215 1012241 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-19 20:21:26.585358474 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 20:21:26.595412 1012241 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 20:21:26.595695 1012241 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 20:21:26.597034 1012241 out.go:177] * Using Docker driver with root privileges
	I0819 20:21:26.598283 1012241 cni.go:84] Creating CNI manager for ""
	I0819 20:21:26.598325 1012241 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 20:21:26.598355 1012241 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 20:21:26.598463 1012241 start.go:340] cluster config:
	{Name:addons-199708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-199708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 20:21:26.600294 1012241 out.go:177] * Starting "addons-199708" primary control-plane node in "addons-199708" cluster
	I0819 20:21:26.601718 1012241 cache.go:121] Beginning downloading kic base image for docker with crio
	I0819 20:21:26.603152 1012241 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0819 20:21:26.605033 1012241 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 20:21:26.605106 1012241 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-1006087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	I0819 20:21:26.605121 1012241 cache.go:56] Caching tarball of preloaded images
	I0819 20:21:26.605125 1012241 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 20:21:26.605207 1012241 preload.go:172] Found /home/jenkins/minikube-integration/19423-1006087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0819 20:21:26.605217 1012241 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 20:21:26.605561 1012241 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/config.json ...
	I0819 20:21:26.605584 1012241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/config.json: {Name:mk4982306a6c220b260448cb6dfbfeaf94699ae8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:26.621094 1012241 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 20:21:26.621236 1012241 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 20:21:26.621262 1012241 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0819 20:21:26.621270 1012241 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0819 20:21:26.621279 1012241 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 20:21:26.621290 1012241 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0819 20:21:43.932251 1012241 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0819 20:21:43.932292 1012241 cache.go:194] Successfully downloaded all kic artifacts
	I0819 20:21:43.932350 1012241 start.go:360] acquireMachinesLock for addons-199708: {Name:mk6c9c0160326aa0c0af4593d4c9c99fe90593b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 20:21:43.932983 1012241 start.go:364] duration metric: took 604.181µs to acquireMachinesLock for "addons-199708"
	I0819 20:21:43.933025 1012241 start.go:93] Provisioning new machine with config: &{Name:addons-199708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-199708 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 20:21:43.933112 1012241 start.go:125] createHost starting for "" (driver="docker")
	I0819 20:21:43.935165 1012241 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0819 20:21:43.935406 1012241 start.go:159] libmachine.API.Create for "addons-199708" (driver="docker")
	I0819 20:21:43.935442 1012241 client.go:168] LocalClient.Create starting
	I0819 20:21:43.935549 1012241 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem
	I0819 20:21:44.250894 1012241 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/cert.pem
	I0819 20:21:44.795690 1012241 cli_runner.go:164] Run: docker network inspect addons-199708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0819 20:21:44.810659 1012241 cli_runner.go:211] docker network inspect addons-199708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0819 20:21:44.810747 1012241 network_create.go:284] running [docker network inspect addons-199708] to gather additional debugging logs...
	I0819 20:21:44.810769 1012241 cli_runner.go:164] Run: docker network inspect addons-199708
	W0819 20:21:44.826106 1012241 cli_runner.go:211] docker network inspect addons-199708 returned with exit code 1
	I0819 20:21:44.826139 1012241 network_create.go:287] error running [docker network inspect addons-199708]: docker network inspect addons-199708: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-199708 not found
	I0819 20:21:44.826152 1012241 network_create.go:289] output of [docker network inspect addons-199708]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-199708 not found
	
	** /stderr **
	I0819 20:21:44.826254 1012241 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 20:21:44.841850 1012241 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40004ae0e0}
	I0819 20:21:44.841894 1012241 network_create.go:124] attempt to create docker network addons-199708 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0819 20:21:44.841949 1012241 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-199708 addons-199708
	I0819 20:21:44.917828 1012241 network_create.go:108] docker network addons-199708 192.168.49.0/24 created
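The lines above show the network-create phase: inspect the named network, fall back when it is not found, pick a free private subnet, and create the bridge network with an explicit gateway and MTU. A minimal Go sketch of that probe-then-create flow follows, assuming the fixed 192.168.49.0/24 subnet instead of minikube's free-subnet search; the helper ensureNetwork is hypothetical, while the docker flags are taken from the command in the log.

package main

import (
	"fmt"
	"os/exec"
)

// ensureNetwork is a hypothetical illustration of the probe-then-create flow
// in the log above: if `docker network inspect` fails (network not found),
// create a bridge network with an explicit subnet, gateway and MTU.
// The fixed subnet here stands in for minikube's free-subnet search.
func ensureNetwork(name, subnet, gateway string) error {
	if err := exec.Command("docker", "network", "inspect", name).Run(); err == nil {
		return nil // network already exists
	}
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet="+subnet,
		"--gateway="+gateway,
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		name).CombinedOutput()
	if err != nil {
		return fmt.Errorf("network create failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := ensureNetwork("addons-199708", "192.168.49.0/24", "192.168.49.1"); err != nil {
		fmt.Println(err)
	}
}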
	I0819 20:21:44.917862 1012241 kic.go:121] calculated static IP "192.168.49.2" for the "addons-199708" container
	I0819 20:21:44.917937 1012241 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0819 20:21:44.932136 1012241 cli_runner.go:164] Run: docker volume create addons-199708 --label name.minikube.sigs.k8s.io=addons-199708 --label created_by.minikube.sigs.k8s.io=true
	I0819 20:21:44.947807 1012241 oci.go:103] Successfully created a docker volume addons-199708
	I0819 20:21:44.947901 1012241 cli_runner.go:164] Run: docker run --rm --name addons-199708-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-199708 --entrypoint /usr/bin/test -v addons-199708:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib
	I0819 20:21:46.932792 1012241 cli_runner.go:217] Completed: docker run --rm --name addons-199708-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-199708 --entrypoint /usr/bin/test -v addons-199708:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib: (1.984844761s)
	I0819 20:21:46.932834 1012241 oci.go:107] Successfully prepared a docker volume addons-199708
	I0819 20:21:46.932859 1012241 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 20:21:46.932878 1012241 kic.go:194] Starting extracting preloaded images to volume ...
	I0819 20:21:46.932974 1012241 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19423-1006087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-199708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir
	I0819 20:21:50.979674 1012241 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19423-1006087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-199708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir: (4.046656802s)
	I0819 20:21:50.979708 1012241 kic.go:203] duration metric: took 4.04682627s to extract preloaded images to volume ...
	W0819 20:21:50.979843 1012241 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0819 20:21:50.979961 1012241 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0819 20:21:51.035315 1012241 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-199708 --name addons-199708 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-199708 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-199708 --network addons-199708 --ip 192.168.49.2 --volume addons-199708:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d
	I0819 20:21:51.376885 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Running}}
	I0819 20:21:51.401666 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:21:51.429278 1012241 cli_runner.go:164] Run: docker exec addons-199708 stat /var/lib/dpkg/alternatives/iptables
	I0819 20:21:51.491656 1012241 oci.go:144] the created container "addons-199708" has a running status.
	I0819 20:21:51.491690 1012241 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa...
	I0819 20:21:52.279538 1012241 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0819 20:21:52.298675 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:21:52.317324 1012241 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0819 20:21:52.317350 1012241 kic_runner.go:114] Args: [docker exec --privileged addons-199708 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0819 20:21:52.385754 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:21:52.407521 1012241 machine.go:93] provisionDockerMachine start ...
	I0819 20:21:52.407619 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:21:52.425983 1012241 main.go:141] libmachine: Using SSH client type: native
	I0819 20:21:52.426266 1012241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33898 <nil> <nil>}
	I0819 20:21:52.426282 1012241 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 20:21:52.557270 1012241 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-199708
	
	I0819 20:21:52.557341 1012241 ubuntu.go:169] provisioning hostname "addons-199708"
	I0819 20:21:52.557450 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:21:52.574183 1012241 main.go:141] libmachine: Using SSH client type: native
	I0819 20:21:52.574432 1012241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33898 <nil> <nil>}
	I0819 20:21:52.574451 1012241 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-199708 && echo "addons-199708" | sudo tee /etc/hostname
	I0819 20:21:52.718325 1012241 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-199708
	
	I0819 20:21:52.718408 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:21:52.736630 1012241 main.go:141] libmachine: Using SSH client type: native
	I0819 20:21:52.736892 1012241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33898 <nil> <nil>}
	I0819 20:21:52.736917 1012241 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-199708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-199708/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-199708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 20:21:52.866022 1012241 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 20:21:52.866055 1012241 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19423-1006087/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-1006087/.minikube}
	I0819 20:21:52.866081 1012241 ubuntu.go:177] setting up certificates
	I0819 20:21:52.866092 1012241 provision.go:84] configureAuth start
	I0819 20:21:52.866158 1012241 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-199708
	I0819 20:21:52.883238 1012241 provision.go:143] copyHostCerts
	I0819 20:21:52.883324 1012241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-1006087/.minikube/cert.pem (1123 bytes)
	I0819 20:21:52.883446 1012241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-1006087/.minikube/key.pem (1675 bytes)
	I0819 20:21:52.883505 1012241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.pem (1082 bytes)
	I0819 20:21:52.883557 1012241 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca-key.pem org=jenkins.addons-199708 san=[127.0.0.1 192.168.49.2 addons-199708 localhost minikube]
	I0819 20:21:53.382790 1012241 provision.go:177] copyRemoteCerts
	I0819 20:21:53.382857 1012241 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 20:21:53.382927 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:21:53.399683 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:21:53.494692 1012241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 20:21:53.519713 1012241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 20:21:53.544807 1012241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 20:21:53.570081 1012241 provision.go:87] duration metric: took 703.974684ms to configureAuth
	I0819 20:21:53.570115 1012241 ubuntu.go:193] setting minikube options for container-runtime
	I0819 20:21:53.570336 1012241 config.go:182] Loaded profile config "addons-199708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 20:21:53.570462 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:21:53.586760 1012241 main.go:141] libmachine: Using SSH client type: native
	I0819 20:21:53.587008 1012241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33898 <nil> <nil>}
	I0819 20:21:53.587029 1012241 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 20:21:53.822303 1012241 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 20:21:53.822330 1012241 machine.go:96] duration metric: took 1.414788015s to provisionDockerMachine
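The insecure-registry option written to /etc/sysconfig/crio.minikube above can be confirmed from the host after provisioning (a sketch; profile name as in this run):

	minikube -p addons-199708 ssh -- cat /etc/sysconfig/crio.minikube
	minikube -p addons-199708 ssh -- systemctl is-active crio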
	I0819 20:21:53.822340 1012241 client.go:171] duration metric: took 9.886889796s to LocalClient.Create
	I0819 20:21:53.822393 1012241 start.go:167] duration metric: took 9.886987084s to libmachine.API.Create "addons-199708"
	I0819 20:21:53.822407 1012241 start.go:293] postStartSetup for "addons-199708" (driver="docker")
	I0819 20:21:53.822418 1012241 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 20:21:53.822527 1012241 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 20:21:53.822590 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:21:53.840701 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:21:53.935316 1012241 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 20:21:53.938664 1012241 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0819 20:21:53.938702 1012241 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0819 20:21:53.938716 1012241 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0819 20:21:53.938723 1012241 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0819 20:21:53.938735 1012241 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-1006087/.minikube/addons for local assets ...
	I0819 20:21:53.938807 1012241 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-1006087/.minikube/files for local assets ...
	I0819 20:21:53.938832 1012241 start.go:296] duration metric: took 116.418837ms for postStartSetup
	I0819 20:21:53.939164 1012241 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-199708
	I0819 20:21:53.955331 1012241 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/config.json ...
	I0819 20:21:53.955643 1012241 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 20:21:53.955712 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:21:53.972244 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:21:54.063316 1012241 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0819 20:21:54.068821 1012241 start.go:128] duration metric: took 10.135692886s to createHost
	I0819 20:21:54.068845 1012241 start.go:83] releasing machines lock for "addons-199708", held for 10.135840996s
	I0819 20:21:54.068929 1012241 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-199708
	I0819 20:21:54.086194 1012241 ssh_runner.go:195] Run: cat /version.json
	I0819 20:21:54.086252 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:21:54.086332 1012241 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 20:21:54.086408 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:21:54.109233 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:21:54.119601 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:21:54.330914 1012241 ssh_runner.go:195] Run: systemctl --version
	I0819 20:21:54.335254 1012241 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 20:21:54.477169 1012241 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 20:21:54.482263 1012241 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 20:21:54.504353 1012241 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0819 20:21:54.504451 1012241 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 20:21:54.542327 1012241 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
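The bridge and podman CNI configs are only renamed with a .mk_disabled suffix, not deleted, so they could be restored later by stripping the suffix (sketch):

	sudo ls /etc/cni/net.d/
	# re-enable one of them if ever needed, e.g.:
	# sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled /etc/cni/net.d/87-podman-bridge.conflist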
	I0819 20:21:54.542350 1012241 start.go:495] detecting cgroup driver to use...
	I0819 20:21:54.542386 1012241 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0819 20:21:54.542449 1012241 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 20:21:54.558492 1012241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 20:21:54.569776 1012241 docker.go:217] disabling cri-docker service (if available) ...
	I0819 20:21:54.569876 1012241 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 20:21:54.583585 1012241 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 20:21:54.598293 1012241 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 20:21:54.679835 1012241 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 20:21:54.780885 1012241 docker.go:233] disabling docker service ...
	I0819 20:21:54.780998 1012241 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 20:21:54.802942 1012241 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 20:21:54.815692 1012241 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 20:21:54.899204 1012241 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 20:21:54.988805 1012241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 20:21:55.001351 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 20:21:55.041134 1012241 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 20:21:55.041214 1012241 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:21:55.053990 1012241 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 20:21:55.054124 1012241 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:21:55.065775 1012241 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:21:55.077839 1012241 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:21:55.090609 1012241 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 20:21:55.101947 1012241 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:21:55.113984 1012241 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:21:55.132795 1012241 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:21:55.143464 1012241 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 20:21:55.152717 1012241 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 20:21:55.161673 1012241 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 20:21:55.243931 1012241 ssh_runner.go:195] Run: sudo systemctl restart crio
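After the sed edits and the restart above, the relevant settings in 02-crio.conf can be spot-checked like this (expected values reconstructed from the commands above):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected, roughly:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]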
	I0819 20:21:55.365971 1012241 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 20:21:55.366103 1012241 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 20:21:55.369933 1012241 start.go:563] Will wait 60s for crictl version
	I0819 20:21:55.370049 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:21:55.373677 1012241 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 20:21:55.418660 1012241 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0819 20:21:55.418817 1012241 ssh_runner.go:195] Run: crio --version
	I0819 20:21:55.462160 1012241 ssh_runner.go:195] Run: crio --version
	I0819 20:21:55.504151 1012241 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0819 20:21:55.506896 1012241 cli_runner.go:164] Run: docker network inspect addons-199708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 20:21:55.522935 1012241 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0819 20:21:55.526662 1012241 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 20:21:55.537914 1012241 kubeadm.go:883] updating cluster {Name:addons-199708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-199708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 20:21:55.538045 1012241 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 20:21:55.538106 1012241 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 20:21:55.614772 1012241 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 20:21:55.614798 1012241 crio.go:433] Images already preloaded, skipping extraction
	I0819 20:21:55.614863 1012241 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 20:21:55.658355 1012241 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 20:21:55.658378 1012241 cache_images.go:84] Images are preloaded, skipping loading
	I0819 20:21:55.658388 1012241 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0819 20:21:55.658494 1012241 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-199708 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-199708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 20:21:55.658578 1012241 ssh_runner.go:195] Run: crio config
	I0819 20:21:55.706254 1012241 cni.go:84] Creating CNI manager for ""
	I0819 20:21:55.706278 1012241 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 20:21:55.706291 1012241 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 20:21:55.706343 1012241 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-199708 NodeName:addons-199708 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 20:21:55.706512 1012241 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-199708"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
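This rendered config is written to /var/tmp/minikube/kubeadm.yaml.new below; since the init warnings later flag the kubeadm.k8s.io/v1beta3 spec as deprecated, it could also be validated or migrated offline first (a hypothetical pre-flight, not performed by the test):

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm-new.yaml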
	
	I0819 20:21:55.706586 1012241 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 20:21:55.715428 1012241 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 20:21:55.715526 1012241 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 20:21:55.724240 1012241 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0819 20:21:55.742720 1012241 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 20:21:55.760589 1012241 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0819 20:21:55.779008 1012241 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0819 20:21:55.782677 1012241 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 20:21:55.793406 1012241 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 20:21:55.882133 1012241 ssh_runner.go:195] Run: sudo systemctl start kubelet
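Whether the 10-kubeadm.conf drop-in written above is actually in effect can be checked with systemd (sketch):

	sudo systemctl cat kubelet
	systemctl is-active kubelet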
	I0819 20:21:55.896172 1012241 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708 for IP: 192.168.49.2
	I0819 20:21:55.896241 1012241 certs.go:194] generating shared ca certs ...
	I0819 20:21:55.896274 1012241 certs.go:226] acquiring lock for ca certs: {Name:mka0619a4a0da3f790025b70d844d99358d748e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:55.896435 1012241 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.key
	I0819 20:21:56.308101 1012241 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.crt ...
	I0819 20:21:56.308137 1012241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.crt: {Name:mk16233753a16be3afb1d9ab0b22ac21b265489c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:56.308798 1012241 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.key ...
	I0819 20:21:56.308815 1012241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.key: {Name:mke7a5da15253b7a448fe87628f984b4e0e6c17a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:56.308911 1012241 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/proxy-client-ca.key
	I0819 20:21:57.087300 1012241 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-1006087/.minikube/proxy-client-ca.crt ...
	I0819 20:21:57.087332 1012241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1006087/.minikube/proxy-client-ca.crt: {Name:mk107d2da75913e05f292352bb957802fe834044 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:57.087991 1012241 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-1006087/.minikube/proxy-client-ca.key ...
	I0819 20:21:57.088045 1012241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1006087/.minikube/proxy-client-ca.key: {Name:mk4dd8e1a67e76977a6797072eacca1d96cb43c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:57.092231 1012241 certs.go:256] generating profile certs ...
	I0819 20:21:57.092394 1012241 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.key
	I0819 20:21:57.092432 1012241 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt with IP's: []
	I0819 20:21:57.771472 1012241 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt ...
	I0819 20:21:57.771509 1012241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: {Name:mk1c2d35b33baec32c6203a2c13726cd6d4387a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:57.774483 1012241 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.key ...
	I0819 20:21:57.774507 1012241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.key: {Name:mke10ffeaf88ab9095075d3e1e57386d96745e44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:57.775070 1012241 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/apiserver.key.8b4edbc0
	I0819 20:21:57.775094 1012241 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/apiserver.crt.8b4edbc0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0819 20:21:57.969392 1012241 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/apiserver.crt.8b4edbc0 ...
	I0819 20:21:57.969425 1012241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/apiserver.crt.8b4edbc0: {Name:mkb0faeb8ba4b2865b55c94e3f37afd3dd19a23b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:57.969642 1012241 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/apiserver.key.8b4edbc0 ...
	I0819 20:21:57.969659 1012241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/apiserver.key.8b4edbc0: {Name:mk4a9645d963042e194024d45aa216d82aed2544 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:57.969754 1012241 certs.go:381] copying /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/apiserver.crt.8b4edbc0 -> /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/apiserver.crt
	I0819 20:21:57.969833 1012241 certs.go:385] copying /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/apiserver.key.8b4edbc0 -> /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/apiserver.key
	I0819 20:21:57.969889 1012241 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/proxy-client.key
	I0819 20:21:57.969906 1012241 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/proxy-client.crt with IP's: []
	I0819 20:21:58.812910 1012241 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/proxy-client.crt ...
	I0819 20:21:58.812945 1012241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/proxy-client.crt: {Name:mkc1d5c82a652223a3d7b19df127f6a13fd3a426 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:58.813134 1012241 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/proxy-client.key ...
	I0819 20:21:58.813149 1012241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/proxy-client.key: {Name:mk3cd1f973f125beee0d9d76964cb35efde0f800 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:21:58.822382 1012241 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 20:21:58.822457 1012241 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem (1082 bytes)
	I0819 20:21:58.822485 1012241 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/cert.pem (1123 bytes)
	I0819 20:21:58.822513 1012241 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/key.pem (1675 bytes)
	I0819 20:21:58.823218 1012241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 20:21:58.849151 1012241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 20:21:58.875664 1012241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 20:21:58.900757 1012241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 20:21:58.925956 1012241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 20:21:58.952179 1012241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 20:21:58.978023 1012241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 20:21:59.005518 1012241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 20:21:59.031576 1012241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 20:21:59.056910 1012241 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 20:21:59.075851 1012241 ssh_runner.go:195] Run: openssl version
	I0819 20:21:59.081683 1012241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 20:21:59.091679 1012241 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:21:59.095435 1012241 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 20:21 /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:21:59.095526 1012241 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:21:59.102804 1012241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
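The b5213941.0 name is just the subject hash computed two lines above plus a ".0" suffix, the OpenSSL c_rehash convention that lets the trust store find minikubeCA.pem; the same link could be created by hand as (sketch):

	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem).0"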
	I0819 20:21:59.112617 1012241 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 20:21:59.116165 1012241 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 20:21:59.116240 1012241 kubeadm.go:392] StartCluster: {Name:addons-199708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-199708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 20:21:59.116333 1012241 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 20:21:59.116397 1012241 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 20:21:59.154981 1012241 cri.go:89] found id: ""
	I0819 20:21:59.155053 1012241 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 20:21:59.164138 1012241 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 20:21:59.173169 1012241 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0819 20:21:59.173258 1012241 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 20:21:59.182444 1012241 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 20:21:59.182467 1012241 kubeadm.go:157] found existing configuration files:
	
	I0819 20:21:59.182525 1012241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 20:21:59.191920 1012241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 20:21:59.192012 1012241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 20:21:59.200366 1012241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 20:21:59.209291 1012241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 20:21:59.209379 1012241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 20:21:59.218057 1012241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 20:21:59.226885 1012241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 20:21:59.226953 1012241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 20:21:59.235710 1012241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 20:21:59.244978 1012241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 20:21:59.245075 1012241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 20:21:59.254157 1012241 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0819 20:21:59.298227 1012241 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 20:21:59.298564 1012241 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 20:21:59.316122 1012241 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0819 20:21:59.316195 1012241 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1067-aws
	I0819 20:21:59.316237 1012241 kubeadm.go:310] OS: Linux
	I0819 20:21:59.316288 1012241 kubeadm.go:310] CGROUPS_CPU: enabled
	I0819 20:21:59.316337 1012241 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0819 20:21:59.316387 1012241 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0819 20:21:59.316438 1012241 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0819 20:21:59.316488 1012241 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0819 20:21:59.316538 1012241 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0819 20:21:59.316585 1012241 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0819 20:21:59.316634 1012241 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0819 20:21:59.316683 1012241 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0819 20:21:59.384211 1012241 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 20:21:59.384333 1012241 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 20:21:59.384426 1012241 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 20:21:59.391216 1012241 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 20:21:59.394946 1012241 out.go:235]   - Generating certificates and keys ...
	I0819 20:21:59.395051 1012241 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 20:21:59.395120 1012241 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 20:21:59.496591 1012241 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 20:22:00.087810 1012241 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 20:22:00.855824 1012241 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 20:22:01.406449 1012241 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 20:22:01.944783 1012241 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 20:22:01.945121 1012241 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-199708 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0819 20:22:03.327965 1012241 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 20:22:03.328316 1012241 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-199708 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0819 20:22:03.517207 1012241 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 20:22:03.905872 1012241 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 20:22:04.317779 1012241 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 20:22:04.318029 1012241 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 20:22:04.834976 1012241 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 20:22:05.248754 1012241 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 20:22:05.609469 1012241 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 20:22:05.998263 1012241 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 20:22:06.485971 1012241 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 20:22:06.486808 1012241 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 20:22:06.489850 1012241 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 20:22:06.492955 1012241 out.go:235]   - Booting up control plane ...
	I0819 20:22:06.493066 1012241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 20:22:06.493142 1012241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 20:22:06.493207 1012241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 20:22:06.507389 1012241 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 20:22:06.515693 1012241 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 20:22:06.516095 1012241 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 20:22:06.613581 1012241 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 20:22:06.613723 1012241 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 20:22:08.114935 1012241 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501535655s
	I0819 20:22:08.115026 1012241 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 20:22:14.118330 1012241 kubeadm.go:310] [api-check] The API server is healthy after 6.001276124s
	I0819 20:22:14.137315 1012241 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 20:22:14.151167 1012241 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 20:22:14.178434 1012241 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 20:22:14.178676 1012241 kubeadm.go:310] [mark-control-plane] Marking the node addons-199708 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 20:22:14.190646 1012241 kubeadm.go:310] [bootstrap-token] Using token: 2z756t.aqpurkuidy5qgcsv
	I0819 20:22:14.193471 1012241 out.go:235]   - Configuring RBAC rules ...
	I0819 20:22:14.193649 1012241 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 20:22:14.199099 1012241 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 20:22:14.207161 1012241 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 20:22:14.211809 1012241 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 20:22:14.217831 1012241 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 20:22:14.223758 1012241 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 20:22:14.526289 1012241 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 20:22:14.975063 1012241 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 20:22:15.524712 1012241 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 20:22:15.525947 1012241 kubeadm.go:310] 
	I0819 20:22:15.526022 1012241 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 20:22:15.526028 1012241 kubeadm.go:310] 
	I0819 20:22:15.526103 1012241 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 20:22:15.526108 1012241 kubeadm.go:310] 
	I0819 20:22:15.526133 1012241 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 20:22:15.526190 1012241 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 20:22:15.526240 1012241 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 20:22:15.526244 1012241 kubeadm.go:310] 
	I0819 20:22:15.526296 1012241 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 20:22:15.526301 1012241 kubeadm.go:310] 
	I0819 20:22:15.526358 1012241 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 20:22:15.526364 1012241 kubeadm.go:310] 
	I0819 20:22:15.526415 1012241 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 20:22:15.526487 1012241 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 20:22:15.526553 1012241 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 20:22:15.526558 1012241 kubeadm.go:310] 
	I0819 20:22:15.526640 1012241 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 20:22:15.526714 1012241 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 20:22:15.526719 1012241 kubeadm.go:310] 
	I0819 20:22:15.526807 1012241 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2z756t.aqpurkuidy5qgcsv \
	I0819 20:22:15.526908 1012241 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:90d2106fd3f826fb0274ca14be0cbc03f42e5b76c699b68b73c6c89fab9fb6bb \
	I0819 20:22:15.526928 1012241 kubeadm.go:310] 	--control-plane 
	I0819 20:22:15.526933 1012241 kubeadm.go:310] 
	I0819 20:22:15.527015 1012241 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 20:22:15.527020 1012241 kubeadm.go:310] 
	I0819 20:22:15.527099 1012241 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2z756t.aqpurkuidy5qgcsv \
	I0819 20:22:15.527198 1012241 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:90d2106fd3f826fb0274ca14be0cbc03f42e5b76c699b68b73c6c89fab9fb6bb 
	I0819 20:22:15.531296 1012241 kubeadm.go:310] W0819 20:21:59.289952    1194 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 20:22:15.531593 1012241 kubeadm.go:310] W0819 20:21:59.295526    1194 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 20:22:15.531802 1012241 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1067-aws\n", err: exit status 1
	I0819 20:22:15.531908 1012241 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
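The bootstrap token printed in the join commands above has a 24h ttl (set in the kubeadm config earlier), so a fresh join command would need to be regenerated on the control plane once it expires (sketch):

	sudo kubeadm token create --print-join-command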
	I0819 20:22:15.531931 1012241 cni.go:84] Creating CNI manager for ""
	I0819 20:22:15.531943 1012241 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 20:22:15.536973 1012241 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 20:22:15.539505 1012241 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 20:22:15.543597 1012241 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 20:22:15.543622 1012241 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 20:22:15.562706 1012241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0819 20:22:15.839514 1012241 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 20:22:15.839666 1012241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:22:15.839761 1012241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-199708 minikube.k8s.io/updated_at=2024_08_19T20_22_15_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=7253360125032c7e2214e25ff4b5c894ae5844e8 minikube.k8s.io/name=addons-199708 minikube.k8s.io/primary=true
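The RBAC binding and node labels applied above can be inspected through the bundled kubectl once the API server answers (sketch, same profile):

	minikube -p addons-199708 kubectl -- get node addons-199708 --show-labels
	minikube -p addons-199708 kubectl -- get clusterrolebinding minikube-rbac -o wide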
	I0819 20:22:15.965860 1012241 ops.go:34] apiserver oom_adj: -16
	I0819 20:22:15.965954 1012241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:22:16.466736 1012241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:22:16.966851 1012241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:22:17.467012 1012241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:22:17.966269 1012241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:22:18.466557 1012241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:22:18.966834 1012241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:22:19.466447 1012241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 20:22:19.609943 1012241 kubeadm.go:1113] duration metric: took 3.770333958s to wait for elevateKubeSystemPrivileges
	I0819 20:22:19.609969 1012241 kubeadm.go:394] duration metric: took 20.493757545s to StartCluster
	I0819 20:22:19.609985 1012241 settings.go:142] acquiring lock: {Name:mk3a0c8d8afbf5cfbc8b518d1bda35579f7cba54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:22:19.610724 1012241 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-1006087/kubeconfig
	I0819 20:22:19.611135 1012241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1006087/kubeconfig: {Name:mk82300af76d6335c7b97db5d9d0a0f9960b80de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:22:19.611373 1012241 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 20:22:19.611502 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 20:22:19.611787 1012241 config.go:182] Loaded profile config "addons-199708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 20:22:19.611817 1012241 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
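The toEnable map above mirrors what the addons CLI reports; individual addons in this profile can be listed or toggled the same way (sketch):

	minikube -p addons-199708 addons list
	minikube -p addons-199708 addons enable metrics-server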
	I0819 20:22:19.611895 1012241 addons.go:69] Setting yakd=true in profile "addons-199708"
	I0819 20:22:19.611917 1012241 addons.go:234] Setting addon yakd=true in "addons-199708"
	I0819 20:22:19.611942 1012241 host.go:66] Checking if "addons-199708" exists ...
	I0819 20:22:19.612429 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:19.613083 1012241 addons.go:69] Setting inspektor-gadget=true in profile "addons-199708"
	I0819 20:22:19.613120 1012241 addons.go:234] Setting addon inspektor-gadget=true in "addons-199708"
	I0819 20:22:19.613150 1012241 host.go:66] Checking if "addons-199708" exists ...
	I0819 20:22:19.613585 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:19.614104 1012241 addons.go:69] Setting cloud-spanner=true in profile "addons-199708"
	I0819 20:22:19.614138 1012241 addons.go:234] Setting addon cloud-spanner=true in "addons-199708"
	I0819 20:22:19.614165 1012241 host.go:66] Checking if "addons-199708" exists ...
	I0819 20:22:19.614564 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:19.621668 1012241 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-199708"
	I0819 20:22:19.621749 1012241 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-199708"
	I0819 20:22:19.621782 1012241 host.go:66] Checking if "addons-199708" exists ...
	I0819 20:22:19.622234 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:19.622641 1012241 addons.go:69] Setting metrics-server=true in profile "addons-199708"
	I0819 20:22:19.622727 1012241 addons.go:234] Setting addon metrics-server=true in "addons-199708"
	I0819 20:22:19.622813 1012241 host.go:66] Checking if "addons-199708" exists ...
	I0819 20:22:19.623983 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:19.626922 1012241 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-199708"
	I0819 20:22:19.626970 1012241 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-199708"
	I0819 20:22:19.627008 1012241 host.go:66] Checking if "addons-199708" exists ...
	I0819 20:22:19.627440 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:19.628927 1012241 addons.go:69] Setting default-storageclass=true in profile "addons-199708"
	I0819 20:22:19.648481 1012241 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-199708"
	I0819 20:22:19.648878 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:19.629118 1012241 addons.go:69] Setting gcp-auth=true in profile "addons-199708"
	I0819 20:22:19.657796 1012241 mustload.go:65] Loading cluster: addons-199708
	I0819 20:22:19.658011 1012241 config.go:182] Loaded profile config "addons-199708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 20:22:19.658325 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:19.629132 1012241 addons.go:69] Setting ingress=true in profile "addons-199708"
	I0819 20:22:19.673992 1012241 addons.go:234] Setting addon ingress=true in "addons-199708"
	I0819 20:22:19.629140 1012241 addons.go:69] Setting ingress-dns=true in profile "addons-199708"
	I0819 20:22:19.674945 1012241 addons.go:234] Setting addon ingress-dns=true in "addons-199708"
	I0819 20:22:19.675002 1012241 host.go:66] Checking if "addons-199708" exists ...
	I0819 20:22:19.629363 1012241 out.go:177] * Verifying Kubernetes components...
	I0819 20:22:19.648342 1012241 addons.go:69] Setting registry=true in profile "addons-199708"
	I0819 20:22:19.648357 1012241 addons.go:69] Setting storage-provisioner=true in profile "addons-199708"
	I0819 20:22:19.648364 1012241 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-199708"
	I0819 20:22:19.648368 1012241 addons.go:69] Setting volcano=true in profile "addons-199708"
	I0819 20:22:19.648380 1012241 addons.go:69] Setting volumesnapshots=true in profile "addons-199708"
	I0819 20:22:19.674863 1012241 host.go:66] Checking if "addons-199708" exists ...
	I0819 20:22:19.675644 1012241 addons.go:234] Setting addon registry=true in "addons-199708"
	I0819 20:22:19.677627 1012241 host.go:66] Checking if "addons-199708" exists ...
	I0819 20:22:19.683167 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:19.684437 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:19.683363 1012241 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 20:22:19.676150 1012241 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-199708"
	I0819 20:22:19.687745 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:19.690357 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:19.676165 1012241 addons.go:234] Setting addon volcano=true in "addons-199708"
	I0819 20:22:19.717541 1012241 host.go:66] Checking if "addons-199708" exists ...
	I0819 20:22:19.718184 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:19.676176 1012241 addons.go:234] Setting addon volumesnapshots=true in "addons-199708"
	I0819 20:22:19.730153 1012241 host.go:66] Checking if "addons-199708" exists ...
	I0819 20:22:19.730886 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:19.675924 1012241 addons.go:234] Setting addon storage-provisioner=true in "addons-199708"
	I0819 20:22:19.766735 1012241 host.go:66] Checking if "addons-199708" exists ...
	I0819 20:22:19.767421 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:19.776007 1012241 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0819 20:22:19.776315 1012241 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0819 20:22:19.776479 1012241 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0819 20:22:19.781721 1012241 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0819 20:22:19.783614 1012241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0819 20:22:19.783695 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:22:19.795194 1012241 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0819 20:22:19.795267 1012241 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0819 20:22:19.795355 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:22:19.799388 1012241 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0819 20:22:19.799776 1012241 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 20:22:19.799792 1012241 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 20:22:19.799868 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:22:19.819047 1012241 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0819 20:22:19.824377 1012241 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0819 20:22:19.824410 1012241 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0819 20:22:19.824449 1012241 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0819 20:22:19.824506 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:22:19.837735 1012241 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0819 20:22:19.862617 1012241 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0819 20:22:19.868609 1012241 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0819 20:22:19.872329 1012241 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0819 20:22:19.877789 1012241 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 20:22:19.877815 1012241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0819 20:22:19.877888 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:22:19.879520 1012241 addons.go:234] Setting addon default-storageclass=true in "addons-199708"
	I0819 20:22:19.879556 1012241 host.go:66] Checking if "addons-199708" exists ...
	I0819 20:22:19.879970 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:19.921655 1012241 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0819 20:22:19.924490 1012241 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0819 20:22:19.933742 1012241 host.go:66] Checking if "addons-199708" exists ...
	I0819 20:22:19.945667 1012241 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0819 20:22:19.948322 1012241 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0819 20:22:19.948356 1012241 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0819 20:22:19.948426 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:22:19.953495 1012241 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0819 20:22:19.954637 1012241 out.go:177]   - Using image docker.io/registry:2.8.3
	I0819 20:22:19.960283 1012241 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0819 20:22:19.963080 1012241 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0819 20:22:19.963112 1012241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0819 20:22:19.963208 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:22:19.979533 1012241 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 20:22:19.979569 1012241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0819 20:22:19.979647 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:22:19.994554 1012241 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-199708"
	I0819 20:22:19.994608 1012241 host.go:66] Checking if "addons-199708" exists ...
	I0819 20:22:19.995106 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:20.016607 1012241 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0819 20:22:20.022699 1012241 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 20:22:20.024836 1012241 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0819 20:22:20.029513 1012241 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0819 20:22:20.029559 1012241 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0819 20:22:20.029685 1012241 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 20:22:20.029856 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	W0819 20:22:20.051946 1012241 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0819 20:22:20.052599 1012241 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 20:22:20.052618 1012241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0819 20:22:20.052681 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:22:20.053158 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
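	(Note: the `docker container inspect -f` calls above resolve the host port Docker published for the node container's 22/tcp, and the sshutil line shows the resulting client against 127.0.0.1:33898. A hypothetical manual equivalent, built only from the values visible in this log, would be:
	    # read the published host port for the container's SSH endpoint, then connect
	    PORT=$(docker container inspect -f \
	      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-199708)
	    ssh -i /home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa \
	      -p "$PORT" docker@127.0.0.1
	This is a sketch of the lookup, not the code minikube runs.)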
	I0819 20:22:20.082190 1012241 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 20:22:20.082851 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:22:20.083700 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:22:20.085216 1012241 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 20:22:20.085237 1012241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 20:22:20.085310 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:22:20.129221 1012241 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 20:22:20.129243 1012241 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 20:22:20.129309 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:22:20.149998 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:22:20.160156 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:22:20.173034 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:22:20.216219 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:22:20.238861 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:22:20.242485 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:22:20.244696 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:22:20.249737 1012241 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0819 20:22:20.252551 1012241 out.go:177]   - Using image docker.io/busybox:stable
	I0819 20:22:20.256699 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:22:20.257632 1012241 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 20:22:20.257653 1012241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0819 20:22:20.257721 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	W0819 20:22:20.266329 1012241 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0819 20:22:20.266362 1012241 retry.go:31] will retry after 204.966099ms: ssh: handshake failed: EOF
	I0819 20:22:20.268520 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:22:20.306475 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
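	(Note: the pipeline above edits the kube-system/coredns ConfigMap in place so that host.minikube.internal resolves to the Docker gateway 192.168.49.1. A simplified sketch of the same edit, assuming kubectl is already pointed at the cluster, would be:
	    # splice a hosts{} block in front of the "forward . /etc/resolv.conf" line, then replace the ConfigMap
	    kubectl -n kube-system get configmap coredns -o yaml \
	      | sed -e '/forward . \/etc\/resolv.conf/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
	      | kubectl replace -f -
	The gateway address and the inserted block are taken from the command in the log line above; this is an illustrative manual equivalent only.)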
	I0819 20:22:20.315969 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:22:20.373471 1012241 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 20:22:20.493482 1012241 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0819 20:22:20.493570 1012241 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0819 20:22:20.538764 1012241 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0819 20:22:20.538836 1012241 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0819 20:22:20.549491 1012241 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0819 20:22:20.549571 1012241 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0819 20:22:20.556266 1012241 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 20:22:20.556334 1012241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0819 20:22:20.600851 1012241 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0819 20:22:20.600929 1012241 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0819 20:22:20.624848 1012241 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0819 20:22:20.624921 1012241 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0819 20:22:20.632377 1012241 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0819 20:22:20.632451 1012241 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0819 20:22:20.663990 1012241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0819 20:22:20.692929 1012241 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0819 20:22:20.692999 1012241 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0819 20:22:20.695695 1012241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 20:22:20.698011 1012241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 20:22:20.747179 1012241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 20:22:20.752228 1012241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 20:22:20.755503 1012241 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 20:22:20.755576 1012241 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 20:22:20.757190 1012241 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0819 20:22:20.757247 1012241 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0819 20:22:20.776772 1012241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 20:22:20.818545 1012241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 20:22:20.845433 1012241 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0819 20:22:20.845505 1012241 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0819 20:22:20.853871 1012241 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0819 20:22:20.853941 1012241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0819 20:22:20.885588 1012241 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0819 20:22:20.885686 1012241 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0819 20:22:20.916560 1012241 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0819 20:22:20.916582 1012241 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0819 20:22:20.965576 1012241 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 20:22:20.965612 1012241 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 20:22:21.064716 1012241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0819 20:22:21.079157 1012241 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0819 20:22:21.079237 1012241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0819 20:22:21.145137 1012241 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0819 20:22:21.145213 1012241 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0819 20:22:21.196813 1012241 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0819 20:22:21.196896 1012241 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0819 20:22:21.209322 1012241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 20:22:21.213993 1012241 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0819 20:22:21.214068 1012241 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0819 20:22:21.329982 1012241 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0819 20:22:21.330056 1012241 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0819 20:22:21.355456 1012241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0819 20:22:21.410252 1012241 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0819 20:22:21.410332 1012241 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0819 20:22:21.480194 1012241 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 20:22:21.480267 1012241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0819 20:22:21.503364 1012241 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0819 20:22:21.503441 1012241 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0819 20:22:21.574543 1012241 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0819 20:22:21.574621 1012241 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0819 20:22:21.639779 1012241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 20:22:21.656252 1012241 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0819 20:22:21.656322 1012241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0819 20:22:21.685185 1012241 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0819 20:22:21.685260 1012241 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0819 20:22:21.764573 1012241 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0819 20:22:21.764650 1012241 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0819 20:22:21.801338 1012241 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 20:22:21.801408 1012241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0819 20:22:21.839908 1012241 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0819 20:22:21.839979 1012241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0819 20:22:21.870557 1012241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 20:22:21.899067 1012241 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0819 20:22:21.899142 1012241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0819 20:22:21.985947 1012241 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 20:22:21.986021 1012241 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0819 20:22:22.065511 1012241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 20:22:23.855136 1012241 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.481590046s)
	I0819 20:22:23.855244 1012241 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.548687926s)
	I0819 20:22:23.855383 1012241 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0819 20:22:23.856972 1012241 node_ready.go:35] waiting up to 6m0s for node "addons-199708" to be "Ready" ...
	I0819 20:22:24.716966 1012241 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-199708" context rescaled to 1 replicas
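	(Note: the kapi helper above rescales the coredns Deployment to a single replica for the single-node profile. A hypothetical plain-kubectl equivalent of that step, not the code minikube runs, would be:
	    # scale kube-system/coredns down to one replica
	    kubectl -n kube-system scale deployment coredns --replicas=1
	)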
	I0819 20:22:24.962051 1012241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.297979096s)
	I0819 20:22:24.962161 1012241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.266409446s)
	I0819 20:22:25.775387 1012241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.077306584s)
	I0819 20:22:25.883801 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:22:26.915995 1012241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.168733835s)
	I0819 20:22:26.916075 1012241 addons.go:475] Verifying addon ingress=true in "addons-199708"
	I0819 20:22:26.916268 1012241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.163978114s)
	I0819 20:22:26.916414 1012241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.139574894s)
	I0819 20:22:26.916518 1012241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.097905065s)
	I0819 20:22:26.916585 1012241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.851843704s)
	I0819 20:22:26.916641 1012241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.707254171s)
	I0819 20:22:26.916975 1012241 addons.go:475] Verifying addon metrics-server=true in "addons-199708"
	I0819 20:22:26.916666 1012241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.561138141s)
	I0819 20:22:26.917011 1012241 addons.go:475] Verifying addon registry=true in "addons-199708"
	I0819 20:22:26.916719 1012241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.276860132s)
	I0819 20:22:26.918612 1012241 out.go:177] * Verifying registry addon...
	I0819 20:22:26.920669 1012241 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-199708 service yakd-dashboard -n yakd-dashboard
	
	I0819 20:22:26.921351 1012241 out.go:177] * Verifying ingress addon...
	I0819 20:22:26.921721 1012241 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0819 20:22:26.924605 1012241 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0819 20:22:26.935092 1012241 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 20:22:26.935173 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:26.935430 1012241 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0819 20:22:26.935451 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
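	(Note: the "Waiting for pod" loops above and below poll pods by label selector until they leave Pending. A hypothetical manual check using the same selectors shown in the log would be:
	    # inspect the pods minikube is waiting on, by the label selectors from the log
	    kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry
	    kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
	These commands are for illustration; minikube performs the polling through its own client.)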
	I0819 20:22:26.978952 1012241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.108313678s)
	W0819 20:22:26.979027 1012241 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 20:22:26.979062 1012241 retry.go:31] will retry after 240.59173ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
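	(Note: the failure above is a CRD-registration race: the VolumeSnapshotClass object is applied in the same batch as the CRDs that define it, before the API server has registered the new kind, so the retry below with `kubectl apply --force` succeeds once the CRDs exist. A hypothetical manual workaround, not minikube's code, would be to wait for the CRD to be Established before re-applying the object:
	    # block until the VolumeSnapshotClass CRD is registered, then apply the class
	    kubectl wait --for=condition=Established \
	      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	The CRD name and manifest path are taken from the log output above.)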
	I0819 20:22:27.220787 1012241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 20:22:27.228666 1012241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.163052943s)
	I0819 20:22:27.228741 1012241 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-199708"
	I0819 20:22:27.233297 1012241 out.go:177] * Verifying csi-hostpath-driver addon...
	I0819 20:22:27.236896 1012241 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0819 20:22:27.258373 1012241 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 20:22:27.258401 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:27.427760 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:27.431641 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:27.764654 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:27.925898 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:27.930110 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:28.256140 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:28.361879 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:22:28.425899 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:28.429045 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:28.743569 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:28.926312 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:28.929073 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:29.241845 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:29.425626 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:29.428282 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:29.741346 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:29.955358 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:29.955975 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:30.241668 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:30.401885 1012241 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.180998384s)
	I0819 20:22:30.425974 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:30.429007 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:30.551280 1012241 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0819 20:22:30.551422 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:22:30.574299 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:22:30.699609 1012241 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0819 20:22:30.720399 1012241 addons.go:234] Setting addon gcp-auth=true in "addons-199708"
	I0819 20:22:30.720459 1012241 host.go:66] Checking if "addons-199708" exists ...
	I0819 20:22:30.720936 1012241 cli_runner.go:164] Run: docker container inspect addons-199708 --format={{.State.Status}}
	I0819 20:22:30.741238 1012241 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0819 20:22:30.741293 1012241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-199708
	I0819 20:22:30.742711 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:30.775112 1012241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/addons-199708/id_rsa Username:docker}
	I0819 20:22:30.860987 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:22:30.868697 1012241 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 20:22:30.871540 1012241 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0819 20:22:30.874656 1012241 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0819 20:22:30.874717 1012241 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0819 20:22:30.899595 1012241 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0819 20:22:30.899619 1012241 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0819 20:22:30.920544 1012241 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 20:22:30.920565 1012241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0819 20:22:30.925915 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:30.931223 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:30.956892 1012241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 20:22:31.241567 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:31.429084 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:31.433942 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:31.595080 1012241 addons.go:475] Verifying addon gcp-auth=true in "addons-199708"
	I0819 20:22:31.598023 1012241 out.go:177] * Verifying gcp-auth addon...
	I0819 20:22:31.602450 1012241 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0819 20:22:31.608594 1012241 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0819 20:22:31.608665 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:31.747896 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:31.925881 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:31.929353 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:32.105958 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:32.241248 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:32.426327 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:32.527562 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:32.609463 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:32.746086 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:32.862059 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:22:32.925811 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:32.929107 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:33.106809 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:33.240927 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:33.430554 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:33.431352 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:33.605713 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:33.740931 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:33.925349 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:33.928133 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:34.107489 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:34.240842 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:34.425004 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:34.428331 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:34.605958 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:34.741048 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:34.925332 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:34.929007 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:35.107046 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:35.241508 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:35.361052 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:22:35.426762 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:35.429087 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:35.606730 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:35.740754 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:35.925151 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:35.928258 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:36.106713 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:36.240729 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:36.425225 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:36.427977 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:36.606383 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:36.740936 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:36.926023 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:36.928658 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:37.106360 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:37.241048 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:37.426796 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:37.428756 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:37.605982 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:37.740393 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:37.860408 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:22:37.925488 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:37.928123 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:38.105525 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:38.240863 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:38.425273 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:38.428465 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:38.605673 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:38.741103 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:38.925318 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:38.928083 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:39.106465 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:39.240968 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:39.425201 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:39.428514 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:39.606533 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:39.740759 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:39.925535 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:39.928362 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:40.106579 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:40.241147 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:40.361076 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:22:40.425812 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:40.429498 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:40.605584 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:40.741025 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:40.924846 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:40.927986 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:41.106485 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:41.240749 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:41.424983 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:41.428064 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:41.606960 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:41.740362 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:41.925863 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:41.928255 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:42.117971 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:42.241868 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:42.361344 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:22:42.425095 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:42.428403 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:42.605412 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:42.741213 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:42.925030 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:42.929457 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:43.105660 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:43.241206 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:43.425541 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:43.428723 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:43.606767 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:43.740536 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:43.925122 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:43.928173 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:44.106575 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:44.241076 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:44.362061 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:22:44.424935 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:44.427844 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:44.606185 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:44.740698 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:44.925500 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:44.928363 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:45.106862 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:45.243680 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:45.428216 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:45.430779 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:45.606065 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:45.741144 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:45.925224 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:45.928525 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:46.106410 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:46.240905 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:46.424743 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:46.428303 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:46.605859 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:46.740970 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:46.861207 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:22:46.925470 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:46.928439 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:47.105731 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:47.240801 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:47.425193 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:47.428555 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:47.605992 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:47.741013 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:47.924902 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:47.928381 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:48.105850 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:48.241063 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:48.425722 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:48.428578 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:48.606538 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:48.741033 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:48.863119 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:22:48.926352 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:48.929561 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:49.105958 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:49.242858 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:49.425872 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:49.429496 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:49.616606 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:49.741971 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:49.924914 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:49.928257 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:50.109741 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:50.241533 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:50.429146 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:50.432002 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:50.606477 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:50.741001 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:50.924955 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:50.928457 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:51.115178 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:51.241334 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:51.361180 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:22:51.425772 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:51.429487 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:51.606312 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:51.740968 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:51.925147 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:51.927836 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:52.107026 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:52.240421 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:52.425670 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:52.429110 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:52.606669 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:52.741162 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:52.925388 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:52.928030 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:53.106450 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:53.240947 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:53.426562 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:53.429312 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:53.605486 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:53.740958 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:53.861430 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:22:53.925961 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:53.928530 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:54.105717 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:54.240716 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:54.425053 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:54.429322 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:54.605825 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:54.740144 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:54.924916 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:54.928294 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:55.109808 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:55.240433 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:55.425284 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:55.429338 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:55.606494 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:55.740686 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:55.861509 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:22:55.924755 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:55.928210 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:56.108780 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:56.241068 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:56.426682 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:56.428680 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:56.606611 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:56.741222 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:56.926544 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:56.928813 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:57.106605 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:57.241259 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:57.425625 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:57.428215 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:57.605566 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:57.741037 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:57.925133 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:57.928687 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:58.106213 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:58.240309 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:58.362158 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:22:58.425377 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:58.429583 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:58.606430 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:58.741024 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:58.925325 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:58.928285 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:59.105540 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:59.241379 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:59.424758 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:59.428306 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:22:59.605726 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:22:59.741033 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:22:59.925485 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:22:59.928828 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:00.120386 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:00.241755 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:00.426176 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:00.430017 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:00.606901 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:00.740547 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:00.860973 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:23:00.925504 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:00.929700 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:01.106242 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:01.241006 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:01.425699 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:01.428254 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:01.605616 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:01.741112 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:01.925615 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:01.928600 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:02.105956 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:02.240736 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:02.425183 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:02.428152 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:02.606614 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:02.741396 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:02.924952 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:02.928923 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:03.106287 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:03.240846 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:03.361108 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:23:03.424707 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:03.428018 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:03.606193 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:03.740364 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:03.925224 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:03.927756 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:04.106002 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:04.240737 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:04.424844 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:04.428312 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:04.605457 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:04.740978 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:04.925109 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:04.928783 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:05.106299 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:05.240576 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:05.424934 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:05.428410 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:05.605680 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:05.740202 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:05.861110 1012241 node_ready.go:53] node "addons-199708" has status "Ready":"False"
	I0819 20:23:05.925417 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:05.934039 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:06.139278 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:06.264253 1012241 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 20:23:06.264283 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:06.367990 1012241 node_ready.go:49] node "addons-199708" has status "Ready":"True"
	I0819 20:23:06.368049 1012241 node_ready.go:38] duration metric: took 42.51079439s for node "addons-199708" to be "Ready" ...
	I0819 20:23:06.368061 1012241 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 20:23:06.410854 1012241 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-6n4mb" in "kube-system" namespace to be "Ready" ...
	I0819 20:23:06.519384 1012241 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 20:23:06.519412 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:06.520313 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:06.687327 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:06.827221 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:06.951844 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:06.955705 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:07.106651 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:07.243343 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:07.418143 1012241 pod_ready.go:93] pod "coredns-6f6b679f8f-6n4mb" in "kube-system" namespace has status "Ready":"True"
	I0819 20:23:07.418167 1012241 pod_ready.go:82] duration metric: took 1.007278517s for pod "coredns-6f6b679f8f-6n4mb" in "kube-system" namespace to be "Ready" ...
	I0819 20:23:07.418189 1012241 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-199708" in "kube-system" namespace to be "Ready" ...
	I0819 20:23:07.424838 1012241 pod_ready.go:93] pod "etcd-addons-199708" in "kube-system" namespace has status "Ready":"True"
	I0819 20:23:07.424865 1012241 pod_ready.go:82] duration metric: took 6.667929ms for pod "etcd-addons-199708" in "kube-system" namespace to be "Ready" ...
	I0819 20:23:07.424881 1012241 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-199708" in "kube-system" namespace to be "Ready" ...
	I0819 20:23:07.426077 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:07.430677 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:07.432468 1012241 pod_ready.go:93] pod "kube-apiserver-addons-199708" in "kube-system" namespace has status "Ready":"True"
	I0819 20:23:07.432493 1012241 pod_ready.go:82] duration metric: took 7.603948ms for pod "kube-apiserver-addons-199708" in "kube-system" namespace to be "Ready" ...
	I0819 20:23:07.432505 1012241 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-199708" in "kube-system" namespace to be "Ready" ...
	I0819 20:23:07.438162 1012241 pod_ready.go:93] pod "kube-controller-manager-addons-199708" in "kube-system" namespace has status "Ready":"True"
	I0819 20:23:07.438189 1012241 pod_ready.go:82] duration metric: took 5.675804ms for pod "kube-controller-manager-addons-199708" in "kube-system" namespace to be "Ready" ...
	I0819 20:23:07.438207 1012241 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-99r72" in "kube-system" namespace to be "Ready" ...
	I0819 20:23:07.561426 1012241 pod_ready.go:93] pod "kube-proxy-99r72" in "kube-system" namespace has status "Ready":"True"
	I0819 20:23:07.561451 1012241 pod_ready.go:82] duration metric: took 123.235387ms for pod "kube-proxy-99r72" in "kube-system" namespace to be "Ready" ...
	I0819 20:23:07.561464 1012241 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-199708" in "kube-system" namespace to be "Ready" ...
	I0819 20:23:07.605634 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:07.742987 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:07.927210 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:07.930265 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:07.961378 1012241 pod_ready.go:93] pod "kube-scheduler-addons-199708" in "kube-system" namespace has status "Ready":"True"
	I0819 20:23:07.961405 1012241 pod_ready.go:82] duration metric: took 399.93288ms for pod "kube-scheduler-addons-199708" in "kube-system" namespace to be "Ready" ...
	I0819 20:23:07.961416 1012241 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace to be "Ready" ...
	I0819 20:23:08.113189 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:08.248110 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:08.426298 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:08.430282 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:08.608619 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:08.742933 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:08.926400 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:08.929383 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:09.108420 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:09.243707 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:09.425900 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:09.431419 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:09.608497 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:09.753010 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:09.931411 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:09.932877 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:09.970271 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:10.107963 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:10.244363 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:10.431720 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:10.438715 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:10.607238 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:10.743646 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:10.929928 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:10.935211 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:11.106864 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:11.242993 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:11.437016 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:11.438702 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:11.606872 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:11.755267 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:11.926045 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:11.939855 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:11.978557 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:12.108087 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:12.261953 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:12.431316 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:12.433201 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:12.612775 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:12.745961 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:12.927415 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:12.936529 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:13.106530 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:13.243112 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:13.426894 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:13.436697 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:13.607182 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:13.756573 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:13.926950 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:13.933048 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:14.106917 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:14.242275 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:14.425535 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:14.429131 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:14.471502 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:14.607707 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:14.742823 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:14.931004 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:14.931939 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:15.111773 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:15.243013 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:15.425208 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:15.431312 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:15.606223 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:15.741865 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:15.926604 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:15.929792 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:16.106880 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:16.242579 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:16.426826 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:16.431106 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:16.606783 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:16.741846 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:16.926260 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:16.929354 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:16.971613 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:17.106552 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:17.243925 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:17.426086 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:17.430223 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:17.607383 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:17.742891 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:17.926772 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:17.931520 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:18.106855 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:18.243739 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:18.427393 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:18.433766 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:18.607596 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:18.744973 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:18.926396 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:18.931800 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:18.974817 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:19.106894 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:19.246671 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:19.429404 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:19.433358 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:19.605816 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:19.742676 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:19.928912 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:19.934182 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:20.107286 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:20.242506 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:20.426711 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:20.430923 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:20.606968 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:20.741692 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:20.925871 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:20.929630 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:21.106831 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:21.243158 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:21.427230 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:21.429231 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:21.467543 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:21.607084 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:21.742084 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:21.926742 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:21.930765 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:22.106507 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:22.241646 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:22.425965 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:22.428780 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:22.606938 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:22.742926 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:22.925648 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:22.929281 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:23.108099 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:23.242886 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:23.427536 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:23.431035 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:23.473870 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:23.615888 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:23.742253 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:23.928230 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:23.928793 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:24.105944 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:24.243046 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:24.428320 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:24.433709 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:24.606603 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:24.744723 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:24.926723 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:24.932630 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:25.108431 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:25.242319 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:25.446337 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:25.467030 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:25.609112 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:25.746233 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:25.926763 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:25.930667 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:25.968104 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:26.106755 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:26.242148 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:26.529506 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:26.530804 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:26.650135 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:26.742518 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:26.927061 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:26.930018 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:27.106666 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:27.243421 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:27.426709 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:27.429793 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:27.606868 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:27.742281 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:27.925853 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:27.928635 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:28.106013 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:28.242198 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:28.425893 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:28.428823 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:28.468478 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:28.606443 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:28.742259 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:28.925862 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:28.929981 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:29.106986 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:29.242359 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:29.426212 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:29.431372 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:29.606662 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:29.742603 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:29.927969 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:29.929312 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:30.108552 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:30.243245 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:30.428895 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:30.430113 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:30.606665 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:30.741867 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:30.928390 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:30.934260 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:30.969308 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:31.107845 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:31.245018 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:31.426240 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:31.430773 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:31.611519 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:31.743930 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:31.927090 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:31.929769 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:32.106614 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:32.242324 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:32.428048 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:32.429781 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:32.606827 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:32.742145 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:32.925149 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:32.928790 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:33.106812 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:33.244463 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:33.426630 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:33.430239 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:33.473106 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:33.607251 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:33.743705 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:33.926477 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:33.931404 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:34.107521 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:34.242890 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:34.428656 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:34.430124 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:34.605996 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:34.742758 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:34.926392 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:34.930731 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:35.106810 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:35.242454 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:35.454111 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:35.454704 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:35.475759 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:35.605965 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:35.741956 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:35.928234 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:35.932677 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:36.106883 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:36.241869 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:36.425413 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:36.429293 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:36.606404 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:36.742731 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:36.927462 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:36.930604 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:37.106942 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:37.241886 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:37.434925 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:37.437377 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:37.606631 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:37.742268 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:37.925766 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:37.928469 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:37.967546 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:38.108973 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:38.242364 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:38.431241 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:38.434765 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:38.624385 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:38.742659 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:38.926662 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:38.929183 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:39.105995 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:39.249137 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:39.435151 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:39.435420 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:39.606999 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:39.742340 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:39.925953 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:39.928733 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:39.968689 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:40.107008 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:40.242670 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:40.434292 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:40.439798 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:40.606347 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:40.741257 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:40.926572 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:40.928419 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:41.106673 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:41.242295 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:41.432763 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:41.433810 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:41.610349 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:41.744939 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:41.942213 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:41.944701 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:41.984196 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:42.107251 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:42.242964 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:42.429014 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:42.434440 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:42.606888 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:42.742969 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:42.927770 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:42.931378 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:43.106788 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:43.243234 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:43.427061 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:43.432064 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:43.606665 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:43.743065 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:43.926273 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:43.931145 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:44.107279 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:44.242807 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:44.426780 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:44.431234 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:44.469170 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:44.607378 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:44.742856 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:44.930947 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:44.933648 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:45.114757 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:45.243519 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:45.434944 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:45.435653 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:45.607174 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:45.744762 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:45.929817 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:45.932132 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:46.106832 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:46.242963 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:46.426114 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:46.431496 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:46.471709 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:46.605985 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:46.741454 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:46.926573 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:46.944027 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:47.107199 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:47.242312 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:47.435744 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:47.436909 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:47.607040 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:47.742095 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:47.926241 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:47.931054 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:48.106793 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:48.241882 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:48.425791 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:48.428808 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:48.606284 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:48.742610 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:48.927118 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:48.933035 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:48.971710 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:49.113118 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:49.243486 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:49.426862 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:49.429999 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:49.606542 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:49.742524 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:49.926325 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:49.929545 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:50.107470 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:50.241947 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:50.426194 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:50.428740 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:50.606190 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:50.742381 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:50.925672 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:50.928489 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:51.106394 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:51.244407 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:51.426812 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:51.431189 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:51.476804 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:51.607274 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:51.742661 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:51.927652 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:51.933339 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:52.107408 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:52.242916 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:52.427498 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:52.435090 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:52.606990 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:52.745083 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:52.925795 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:52.930976 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:53.107187 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:53.242568 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:53.426201 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:53.430602 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:53.606818 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:53.742172 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:53.938906 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:53.946162 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:53.974016 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:54.106146 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:54.241814 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:54.425811 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:54.428228 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:54.606057 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:54.742322 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:54.926288 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 20:23:54.942120 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:55.106846 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:55.242518 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:55.425989 1012241 kapi.go:107] duration metric: took 1m28.504263748s to wait for kubernetes.io/minikube-addons=registry ...
	I0819 20:23:55.428435 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:55.606792 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:55.742705 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:55.929222 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:56.106208 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:56.242507 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:56.430007 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:56.468863 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:56.607121 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:56.742783 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:56.929835 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:57.106319 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:57.244123 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:57.432349 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:57.606745 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:57.741848 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:57.929234 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:58.114165 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:58.241990 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:58.429930 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:58.476249 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:23:58.607395 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:58.742858 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:58.929775 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:59.106582 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:59.242772 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:59.434457 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:23:59.606757 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:23:59.742434 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:23:59.931306 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:24:00.135247 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:00.257261 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:00.559170 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:24:00.560676 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:24:00.607377 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:00.744080 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:00.929409 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:24:01.107343 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:01.243377 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:01.433571 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:24:01.607188 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:01.743222 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:01.930317 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:24:02.107162 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:02.242751 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:02.429580 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:24:02.607185 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:02.744614 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:02.929525 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:24:02.968822 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:24:03.107144 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:03.242410 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:03.430998 1012241 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 20:24:03.618882 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:03.743115 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:03.931415 1012241 kapi.go:107] duration metric: took 1m37.006805239s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0819 20:24:04.107015 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:04.242192 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:04.606355 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:04.741695 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:04.968964 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:24:05.108942 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:05.246114 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:05.607881 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:05.742002 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:06.106715 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:06.241954 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:06.606735 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:06.741293 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:07.107383 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:07.242426 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:07.468562 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:24:07.606702 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:07.742218 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:08.106493 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:08.244691 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:08.605990 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:08.741726 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:09.107592 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:09.242606 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:09.471398 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:24:09.610249 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:09.748396 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:10.107379 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:10.246776 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:10.606733 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:10.742801 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:11.126894 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:11.241712 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:11.610338 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:11.742892 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:11.969125 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:24:12.109258 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:12.242082 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:12.613176 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:12.743057 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:13.107217 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:13.244499 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:13.606427 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:13.745367 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:14.109361 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:14.243696 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:14.471998 1012241 pod_ready.go:103] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"False"
	I0819 20:24:14.606097 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:14.742760 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:15.110878 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:15.242423 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:15.606886 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 20:24:15.743828 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:16.107118 1012241 kapi.go:107] duration metric: took 1m44.504665719s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0819 20:24:16.109941 1012241 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-199708 cluster.
	I0819 20:24:16.112759 1012241 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0819 20:24:16.115432 1012241 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
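For illustration of the opt-out mentioned in the gcp-auth output above: a pod skips the credential mount when its spec carries the `gcp-auth-skip-secret` label. A minimal sketch follows; the pod name, image, and the label value "true" are assumptions for illustration and are not taken from this run.

    # hypothetical pod configuration carrying the opt-out label described above
    apiVersion: v1
    kind: Pod
    metadata:
      name: example-no-gcp-creds        # assumed name, illustration only
      labels:
        gcp-auth-skip-secret: "true"    # label key from the addon output; value "true" is an assumption
    spec:
      containers:
      - name: app
        image: nginx                    # assumed image, illustration only
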
	I0819 20:24:16.244763 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:16.475345 1012241 pod_ready.go:93] pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace has status "Ready":"True"
	I0819 20:24:16.475370 1012241 pod_ready.go:82] duration metric: took 1m8.513946192s for pod "metrics-server-8988944d9-phnbr" in "kube-system" namespace to be "Ready" ...
	I0819 20:24:16.475390 1012241 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-6p75r" in "kube-system" namespace to be "Ready" ...
	I0819 20:24:16.489003 1012241 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-6p75r" in "kube-system" namespace has status "Ready":"True"
	I0819 20:24:16.489107 1012241 pod_ready.go:82] duration metric: took 13.707029ms for pod "nvidia-device-plugin-daemonset-6p75r" in "kube-system" namespace to be "Ready" ...
	I0819 20:24:16.489153 1012241 pod_ready.go:39] duration metric: took 1m10.121073383s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 20:24:16.489202 1012241 api_server.go:52] waiting for apiserver process to appear ...
	I0819 20:24:16.489273 1012241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:24:16.489419 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:24:16.570925 1012241 cri.go:89] found id: "c5ee1a4b656858706d16e09f2577f1ceb0f47aabd974faa222453c787c1b7bd8"
	I0819 20:24:16.570998 1012241 cri.go:89] found id: ""
	I0819 20:24:16.571021 1012241 logs.go:276] 1 containers: [c5ee1a4b656858706d16e09f2577f1ceb0f47aabd974faa222453c787c1b7bd8]
	I0819 20:24:16.571140 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:16.578051 1012241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:24:16.578169 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:24:16.632509 1012241 cri.go:89] found id: "926dc4caa041d397c3880a4325d8f356a972cfccd2a77902392d470e8a12ffc1"
	I0819 20:24:16.632590 1012241 cri.go:89] found id: ""
	I0819 20:24:16.632613 1012241 logs.go:276] 1 containers: [926dc4caa041d397c3880a4325d8f356a972cfccd2a77902392d470e8a12ffc1]
	I0819 20:24:16.632742 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:16.638483 1012241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:24:16.638603 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:24:16.711434 1012241 cri.go:89] found id: "4496c326dd4d9a3ff2e3a885ab411816b9ff5078f1f9fa33fcf51557b7fe96ec"
	I0819 20:24:16.711540 1012241 cri.go:89] found id: ""
	I0819 20:24:16.711566 1012241 logs.go:276] 1 containers: [4496c326dd4d9a3ff2e3a885ab411816b9ff5078f1f9fa33fcf51557b7fe96ec]
	I0819 20:24:16.711779 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:16.721069 1012241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:24:16.721297 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:24:16.745367 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:16.801901 1012241 cri.go:89] found id: "7f089b595eb71f4f444cfba1715195b143c2da503401429047e8f0059ded8ce0"
	I0819 20:24:16.801926 1012241 cri.go:89] found id: ""
	I0819 20:24:16.801936 1012241 logs.go:276] 1 containers: [7f089b595eb71f4f444cfba1715195b143c2da503401429047e8f0059ded8ce0]
	I0819 20:24:16.801996 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:16.810474 1012241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:24:16.810555 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:24:16.903763 1012241 cri.go:89] found id: "0e164c1098e699c8334f713a53dccc6fb785c5a533691496feb7bfbb3bc3fc57"
	I0819 20:24:16.903788 1012241 cri.go:89] found id: ""
	I0819 20:24:16.903797 1012241 logs.go:276] 1 containers: [0e164c1098e699c8334f713a53dccc6fb785c5a533691496feb7bfbb3bc3fc57]
	I0819 20:24:16.903854 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:16.910320 1012241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:24:16.910457 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:24:17.024005 1012241 cri.go:89] found id: "17ec9f70f07aae6962f91d85b38bb77039cd2b084aa3c8faee6f57d6a8c3f529"
	I0819 20:24:17.024078 1012241 cri.go:89] found id: ""
	I0819 20:24:17.024099 1012241 logs.go:276] 1 containers: [17ec9f70f07aae6962f91d85b38bb77039cd2b084aa3c8faee6f57d6a8c3f529]
	I0819 20:24:17.024194 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:17.032001 1012241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:24:17.032137 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:24:17.106335 1012241 cri.go:89] found id: "6bdf6081a42b6a1f5ee894cd0d45bf4d184f481b1cd7cbd6cc01a0e3700332de"
	I0819 20:24:17.106362 1012241 cri.go:89] found id: ""
	I0819 20:24:17.106370 1012241 logs.go:276] 1 containers: [6bdf6081a42b6a1f5ee894cd0d45bf4d184f481b1cd7cbd6cc01a0e3700332de]
	I0819 20:24:17.106462 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:17.110824 1012241 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:24:17.110862 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 20:24:17.242732 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:17.580241 1012241 logs.go:123] Gathering logs for kube-apiserver [c5ee1a4b656858706d16e09f2577f1ceb0f47aabd974faa222453c787c1b7bd8] ...
	I0819 20:24:17.580319 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5ee1a4b656858706d16e09f2577f1ceb0f47aabd974faa222453c787c1b7bd8"
	I0819 20:24:17.677887 1012241 logs.go:123] Gathering logs for kube-controller-manager [17ec9f70f07aae6962f91d85b38bb77039cd2b084aa3c8faee6f57d6a8c3f529] ...
	I0819 20:24:17.677965 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17ec9f70f07aae6962f91d85b38bb77039cd2b084aa3c8faee6f57d6a8c3f529"
	I0819 20:24:17.747460 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:17.759447 1012241 logs.go:123] Gathering logs for kindnet [6bdf6081a42b6a1f5ee894cd0d45bf4d184f481b1cd7cbd6cc01a0e3700332de] ...
	I0819 20:24:17.759620 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bdf6081a42b6a1f5ee894cd0d45bf4d184f481b1cd7cbd6cc01a0e3700332de"
	I0819 20:24:17.827313 1012241 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:24:17.827398 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:24:17.950772 1012241 logs.go:123] Gathering logs for container status ...
	I0819 20:24:17.950811 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:24:18.001915 1012241 logs.go:123] Gathering logs for kubelet ...
	I0819 20:24:18.001953 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 20:24:18.089272 1012241 logs.go:138] Found kubelet problem: Aug 19 20:23:05 addons-199708 kubelet[1507]: W0819 20:23:05.995162    1507 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-199708" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-199708' and this object
	W0819 20:24:18.089525 1012241 logs.go:138] Found kubelet problem: Aug 19 20:23:05 addons-199708 kubelet[1507]: E0819 20:23:05.995217    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-199708\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-199708' and this object" logger="UnhandledError"
	W0819 20:24:18.089744 1012241 logs.go:138] Found kubelet problem: Aug 19 20:23:05 addons-199708 kubelet[1507]: W0819 20:23:05.995389    1507 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-199708" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-199708' and this object
	W0819 20:24:18.089975 1012241 logs.go:138] Found kubelet problem: Aug 19 20:23:05 addons-199708 kubelet[1507]: E0819 20:23:05.995412    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-199708\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-199708' and this object" logger="UnhandledError"
	I0819 20:24:18.132482 1012241 logs.go:123] Gathering logs for dmesg ...
	I0819 20:24:18.132566 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:24:18.155620 1012241 logs.go:123] Gathering logs for etcd [926dc4caa041d397c3880a4325d8f356a972cfccd2a77902392d470e8a12ffc1] ...
	I0819 20:24:18.155691 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 926dc4caa041d397c3880a4325d8f356a972cfccd2a77902392d470e8a12ffc1"
	I0819 20:24:18.243244 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:18.251320 1012241 logs.go:123] Gathering logs for coredns [4496c326dd4d9a3ff2e3a885ab411816b9ff5078f1f9fa33fcf51557b7fe96ec] ...
	I0819 20:24:18.251354 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4496c326dd4d9a3ff2e3a885ab411816b9ff5078f1f9fa33fcf51557b7fe96ec"
	I0819 20:24:18.302939 1012241 logs.go:123] Gathering logs for kube-scheduler [7f089b595eb71f4f444cfba1715195b143c2da503401429047e8f0059ded8ce0] ...
	I0819 20:24:18.302972 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f089b595eb71f4f444cfba1715195b143c2da503401429047e8f0059ded8ce0"
	I0819 20:24:18.349975 1012241 logs.go:123] Gathering logs for kube-proxy [0e164c1098e699c8334f713a53dccc6fb785c5a533691496feb7bfbb3bc3fc57] ...
	I0819 20:24:18.350006 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e164c1098e699c8334f713a53dccc6fb785c5a533691496feb7bfbb3bc3fc57"
	I0819 20:24:18.391478 1012241 out.go:358] Setting ErrFile to fd 2...
	I0819 20:24:18.391506 1012241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 20:24:18.391556 1012241 out.go:270] X Problems detected in kubelet:
	W0819 20:24:18.391579 1012241 out.go:270]   Aug 19 20:23:05 addons-199708 kubelet[1507]: W0819 20:23:05.995162    1507 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-199708" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-199708' and this object
	W0819 20:24:18.391587 1012241 out.go:270]   Aug 19 20:23:05 addons-199708 kubelet[1507]: E0819 20:23:05.995217    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-199708\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-199708' and this object" logger="UnhandledError"
	W0819 20:24:18.391600 1012241 out.go:270]   Aug 19 20:23:05 addons-199708 kubelet[1507]: W0819 20:23:05.995389    1507 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-199708" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-199708' and this object
	W0819 20:24:18.391606 1012241 out.go:270]   Aug 19 20:23:05 addons-199708 kubelet[1507]: E0819 20:23:05.995412    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-199708\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-199708' and this object" logger="UnhandledError"
	I0819 20:24:18.391619 1012241 out.go:358] Setting ErrFile to fd 2...
	I0819 20:24:18.391626 1012241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:24:18.742261 1012241 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 20:24:19.242706 1012241 kapi.go:107] duration metric: took 1m52.005821099s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0819 20:24:19.244386 1012241 out.go:177] * Enabled addons: cloud-spanner, default-storageclass, storage-provisioner, ingress-dns, nvidia-device-plugin, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0819 20:24:19.246034 1012241 addons.go:510] duration metric: took 1m59.634203667s for enable addons: enabled=[cloud-spanner default-storageclass storage-provisioner ingress-dns nvidia-device-plugin metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0819 20:24:28.393157 1012241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:24:28.407073 1012241 api_server.go:72] duration metric: took 2m8.795670662s to wait for apiserver process to appear ...
	I0819 20:24:28.407098 1012241 api_server.go:88] waiting for apiserver healthz status ...
	I0819 20:24:28.407135 1012241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:24:28.407197 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:24:28.454246 1012241 cri.go:89] found id: "c5ee1a4b656858706d16e09f2577f1ceb0f47aabd974faa222453c787c1b7bd8"
	I0819 20:24:28.454294 1012241 cri.go:89] found id: ""
	I0819 20:24:28.454302 1012241 logs.go:276] 1 containers: [c5ee1a4b656858706d16e09f2577f1ceb0f47aabd974faa222453c787c1b7bd8]
	I0819 20:24:28.454362 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:28.457763 1012241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:24:28.457831 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:24:28.498431 1012241 cri.go:89] found id: "926dc4caa041d397c3880a4325d8f356a972cfccd2a77902392d470e8a12ffc1"
	I0819 20:24:28.498453 1012241 cri.go:89] found id: ""
	I0819 20:24:28.498461 1012241 logs.go:276] 1 containers: [926dc4caa041d397c3880a4325d8f356a972cfccd2a77902392d470e8a12ffc1]
	I0819 20:24:28.498516 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:28.501884 1012241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:24:28.501955 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:24:28.547991 1012241 cri.go:89] found id: "4496c326dd4d9a3ff2e3a885ab411816b9ff5078f1f9fa33fcf51557b7fe96ec"
	I0819 20:24:28.548013 1012241 cri.go:89] found id: ""
	I0819 20:24:28.548021 1012241 logs.go:276] 1 containers: [4496c326dd4d9a3ff2e3a885ab411816b9ff5078f1f9fa33fcf51557b7fe96ec]
	I0819 20:24:28.548084 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:28.551656 1012241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:24:28.551738 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:24:28.591665 1012241 cri.go:89] found id: "7f089b595eb71f4f444cfba1715195b143c2da503401429047e8f0059ded8ce0"
	I0819 20:24:28.591689 1012241 cri.go:89] found id: ""
	I0819 20:24:28.591698 1012241 logs.go:276] 1 containers: [7f089b595eb71f4f444cfba1715195b143c2da503401429047e8f0059ded8ce0]
	I0819 20:24:28.591765 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:28.595477 1012241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:24:28.595555 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:24:28.634837 1012241 cri.go:89] found id: "0e164c1098e699c8334f713a53dccc6fb785c5a533691496feb7bfbb3bc3fc57"
	I0819 20:24:28.634862 1012241 cri.go:89] found id: ""
	I0819 20:24:28.634870 1012241 logs.go:276] 1 containers: [0e164c1098e699c8334f713a53dccc6fb785c5a533691496feb7bfbb3bc3fc57]
	I0819 20:24:28.634927 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:28.638513 1012241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:24:28.638584 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:24:28.682379 1012241 cri.go:89] found id: "17ec9f70f07aae6962f91d85b38bb77039cd2b084aa3c8faee6f57d6a8c3f529"
	I0819 20:24:28.682412 1012241 cri.go:89] found id: ""
	I0819 20:24:28.682447 1012241 logs.go:276] 1 containers: [17ec9f70f07aae6962f91d85b38bb77039cd2b084aa3c8faee6f57d6a8c3f529]
	I0819 20:24:28.682521 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:28.686046 1012241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:24:28.686140 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:24:28.727462 1012241 cri.go:89] found id: "6bdf6081a42b6a1f5ee894cd0d45bf4d184f481b1cd7cbd6cc01a0e3700332de"
	I0819 20:24:28.727532 1012241 cri.go:89] found id: ""
	I0819 20:24:28.727540 1012241 logs.go:276] 1 containers: [6bdf6081a42b6a1f5ee894cd0d45bf4d184f481b1cd7cbd6cc01a0e3700332de]
	I0819 20:24:28.727601 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:28.731328 1012241 logs.go:123] Gathering logs for kube-proxy [0e164c1098e699c8334f713a53dccc6fb785c5a533691496feb7bfbb3bc3fc57] ...
	I0819 20:24:28.731356 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e164c1098e699c8334f713a53dccc6fb785c5a533691496feb7bfbb3bc3fc57"
	I0819 20:24:28.774182 1012241 logs.go:123] Gathering logs for kindnet [6bdf6081a42b6a1f5ee894cd0d45bf4d184f481b1cd7cbd6cc01a0e3700332de] ...
	I0819 20:24:28.774213 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bdf6081a42b6a1f5ee894cd0d45bf4d184f481b1cd7cbd6cc01a0e3700332de"
	I0819 20:24:28.829817 1012241 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:24:28.829851 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:24:28.931277 1012241 logs.go:123] Gathering logs for kubelet ...
	I0819 20:24:28.931312 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 20:24:28.986656 1012241 logs.go:138] Found kubelet problem: Aug 19 20:23:05 addons-199708 kubelet[1507]: W0819 20:23:05.995162    1507 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-199708" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-199708' and this object
	W0819 20:24:28.986902 1012241 logs.go:138] Found kubelet problem: Aug 19 20:23:05 addons-199708 kubelet[1507]: E0819 20:23:05.995217    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-199708\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-199708' and this object" logger="UnhandledError"
	W0819 20:24:28.987092 1012241 logs.go:138] Found kubelet problem: Aug 19 20:23:05 addons-199708 kubelet[1507]: W0819 20:23:05.995389    1507 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-199708" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-199708' and this object
	W0819 20:24:28.987324 1012241 logs.go:138] Found kubelet problem: Aug 19 20:23:05 addons-199708 kubelet[1507]: E0819 20:23:05.995412    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-199708\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-199708' and this object" logger="UnhandledError"
	I0819 20:24:29.024723 1012241 logs.go:123] Gathering logs for dmesg ...
	I0819 20:24:29.024757 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:24:29.041183 1012241 logs.go:123] Gathering logs for kube-apiserver [c5ee1a4b656858706d16e09f2577f1ceb0f47aabd974faa222453c787c1b7bd8] ...
	I0819 20:24:29.041211 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5ee1a4b656858706d16e09f2577f1ceb0f47aabd974faa222453c787c1b7bd8"
	I0819 20:24:29.112777 1012241 logs.go:123] Gathering logs for etcd [926dc4caa041d397c3880a4325d8f356a972cfccd2a77902392d470e8a12ffc1] ...
	I0819 20:24:29.112811 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 926dc4caa041d397c3880a4325d8f356a972cfccd2a77902392d470e8a12ffc1"
	I0819 20:24:29.165323 1012241 logs.go:123] Gathering logs for kube-scheduler [7f089b595eb71f4f444cfba1715195b143c2da503401429047e8f0059ded8ce0] ...
	I0819 20:24:29.165357 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f089b595eb71f4f444cfba1715195b143c2da503401429047e8f0059ded8ce0"
	I0819 20:24:29.212570 1012241 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:24:29.212603 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 20:24:29.351681 1012241 logs.go:123] Gathering logs for coredns [4496c326dd4d9a3ff2e3a885ab411816b9ff5078f1f9fa33fcf51557b7fe96ec] ...
	I0819 20:24:29.351717 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4496c326dd4d9a3ff2e3a885ab411816b9ff5078f1f9fa33fcf51557b7fe96ec"
	I0819 20:24:29.391732 1012241 logs.go:123] Gathering logs for kube-controller-manager [17ec9f70f07aae6962f91d85b38bb77039cd2b084aa3c8faee6f57d6a8c3f529] ...
	I0819 20:24:29.391764 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17ec9f70f07aae6962f91d85b38bb77039cd2b084aa3c8faee6f57d6a8c3f529"
	I0819 20:24:29.477318 1012241 logs.go:123] Gathering logs for container status ...
	I0819 20:24:29.477354 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:24:29.541960 1012241 out.go:358] Setting ErrFile to fd 2...
	I0819 20:24:29.541988 1012241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 20:24:29.542044 1012241 out.go:270] X Problems detected in kubelet:
	W0819 20:24:29.542060 1012241 out.go:270]   Aug 19 20:23:05 addons-199708 kubelet[1507]: W0819 20:23:05.995162    1507 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-199708" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-199708' and this object
	W0819 20:24:29.542075 1012241 out.go:270]   Aug 19 20:23:05 addons-199708 kubelet[1507]: E0819 20:23:05.995217    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-199708\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-199708' and this object" logger="UnhandledError"
	W0819 20:24:29.542082 1012241 out.go:270]   Aug 19 20:23:05 addons-199708 kubelet[1507]: W0819 20:23:05.995389    1507 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-199708" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-199708' and this object
	W0819 20:24:29.542091 1012241 out.go:270]   Aug 19 20:23:05 addons-199708 kubelet[1507]: E0819 20:23:05.995412    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-199708\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-199708' and this object" logger="UnhandledError"
	I0819 20:24:29.542099 1012241 out.go:358] Setting ErrFile to fd 2...
	I0819 20:24:29.542106 1012241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:24:39.543323 1012241 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:24:39.552891 1012241 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0819 20:24:39.554226 1012241 api_server.go:141] control plane version: v1.31.0
	I0819 20:24:39.554250 1012241 api_server.go:131] duration metric: took 11.147145485s to wait for apiserver health ...
	I0819 20:24:39.554259 1012241 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 20:24:39.554283 1012241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:24:39.554356 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:24:39.609850 1012241 cri.go:89] found id: "c5ee1a4b656858706d16e09f2577f1ceb0f47aabd974faa222453c787c1b7bd8"
	I0819 20:24:39.609881 1012241 cri.go:89] found id: ""
	I0819 20:24:39.609890 1012241 logs.go:276] 1 containers: [c5ee1a4b656858706d16e09f2577f1ceb0f47aabd974faa222453c787c1b7bd8]
	I0819 20:24:39.609952 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:39.613500 1012241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:24:39.613575 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:24:39.654987 1012241 cri.go:89] found id: "926dc4caa041d397c3880a4325d8f356a972cfccd2a77902392d470e8a12ffc1"
	I0819 20:24:39.655009 1012241 cri.go:89] found id: ""
	I0819 20:24:39.655017 1012241 logs.go:276] 1 containers: [926dc4caa041d397c3880a4325d8f356a972cfccd2a77902392d470e8a12ffc1]
	I0819 20:24:39.655078 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:39.659467 1012241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:24:39.659537 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:24:39.733901 1012241 cri.go:89] found id: "4496c326dd4d9a3ff2e3a885ab411816b9ff5078f1f9fa33fcf51557b7fe96ec"
	I0819 20:24:39.733923 1012241 cri.go:89] found id: ""
	I0819 20:24:39.733931 1012241 logs.go:276] 1 containers: [4496c326dd4d9a3ff2e3a885ab411816b9ff5078f1f9fa33fcf51557b7fe96ec]
	I0819 20:24:39.733987 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:39.737487 1012241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:24:39.737563 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:24:39.783938 1012241 cri.go:89] found id: "7f089b595eb71f4f444cfba1715195b143c2da503401429047e8f0059ded8ce0"
	I0819 20:24:39.783963 1012241 cri.go:89] found id: ""
	I0819 20:24:39.783970 1012241 logs.go:276] 1 containers: [7f089b595eb71f4f444cfba1715195b143c2da503401429047e8f0059ded8ce0]
	I0819 20:24:39.784033 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:39.787772 1012241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:24:39.787844 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:24:39.836687 1012241 cri.go:89] found id: "0e164c1098e699c8334f713a53dccc6fb785c5a533691496feb7bfbb3bc3fc57"
	I0819 20:24:39.836712 1012241 cri.go:89] found id: ""
	I0819 20:24:39.836720 1012241 logs.go:276] 1 containers: [0e164c1098e699c8334f713a53dccc6fb785c5a533691496feb7bfbb3bc3fc57]
	I0819 20:24:39.836778 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:39.840569 1012241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:24:39.840656 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:24:39.892838 1012241 cri.go:89] found id: "17ec9f70f07aae6962f91d85b38bb77039cd2b084aa3c8faee6f57d6a8c3f529"
	I0819 20:24:39.892862 1012241 cri.go:89] found id: ""
	I0819 20:24:39.892870 1012241 logs.go:276] 1 containers: [17ec9f70f07aae6962f91d85b38bb77039cd2b084aa3c8faee6f57d6a8c3f529]
	I0819 20:24:39.892929 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:39.900058 1012241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:24:39.900187 1012241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:24:40.030238 1012241 cri.go:89] found id: "6bdf6081a42b6a1f5ee894cd0d45bf4d184f481b1cd7cbd6cc01a0e3700332de"
	I0819 20:24:40.030266 1012241 cri.go:89] found id: ""
	I0819 20:24:40.030279 1012241 logs.go:276] 1 containers: [6bdf6081a42b6a1f5ee894cd0d45bf4d184f481b1cd7cbd6cc01a0e3700332de]
	I0819 20:24:40.030406 1012241 ssh_runner.go:195] Run: which crictl
	I0819 20:24:40.040785 1012241 logs.go:123] Gathering logs for kubelet ...
	I0819 20:24:40.040817 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 20:24:40.096987 1012241 logs.go:138] Found kubelet problem: Aug 19 20:23:05 addons-199708 kubelet[1507]: W0819 20:23:05.995162    1507 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-199708" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-199708' and this object
	W0819 20:24:40.097241 1012241 logs.go:138] Found kubelet problem: Aug 19 20:23:05 addons-199708 kubelet[1507]: E0819 20:23:05.995217    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-199708\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-199708' and this object" logger="UnhandledError"
	W0819 20:24:40.097430 1012241 logs.go:138] Found kubelet problem: Aug 19 20:23:05 addons-199708 kubelet[1507]: W0819 20:23:05.995389    1507 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-199708" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-199708' and this object
	W0819 20:24:40.097761 1012241 logs.go:138] Found kubelet problem: Aug 19 20:23:05 addons-199708 kubelet[1507]: E0819 20:23:05.995412    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-199708\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-199708' and this object" logger="UnhandledError"
	I0819 20:24:40.140134 1012241 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:24:40.140172 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 20:24:40.379574 1012241 logs.go:123] Gathering logs for kube-scheduler [7f089b595eb71f4f444cfba1715195b143c2da503401429047e8f0059ded8ce0] ...
	I0819 20:24:40.379606 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f089b595eb71f4f444cfba1715195b143c2da503401429047e8f0059ded8ce0"
	I0819 20:24:40.459970 1012241 logs.go:123] Gathering logs for kindnet [6bdf6081a42b6a1f5ee894cd0d45bf4d184f481b1cd7cbd6cc01a0e3700332de] ...
	I0819 20:24:40.460001 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bdf6081a42b6a1f5ee894cd0d45bf4d184f481b1cd7cbd6cc01a0e3700332de"
	I0819 20:24:40.511002 1012241 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:24:40.511038 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:24:40.609475 1012241 logs.go:123] Gathering logs for dmesg ...
	I0819 20:24:40.609512 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:24:40.627600 1012241 logs.go:123] Gathering logs for kube-apiserver [c5ee1a4b656858706d16e09f2577f1ceb0f47aabd974faa222453c787c1b7bd8] ...
	I0819 20:24:40.627633 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5ee1a4b656858706d16e09f2577f1ceb0f47aabd974faa222453c787c1b7bd8"
	I0819 20:24:40.708954 1012241 logs.go:123] Gathering logs for etcd [926dc4caa041d397c3880a4325d8f356a972cfccd2a77902392d470e8a12ffc1] ...
	I0819 20:24:40.708986 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 926dc4caa041d397c3880a4325d8f356a972cfccd2a77902392d470e8a12ffc1"
	I0819 20:24:40.773054 1012241 logs.go:123] Gathering logs for coredns [4496c326dd4d9a3ff2e3a885ab411816b9ff5078f1f9fa33fcf51557b7fe96ec] ...
	I0819 20:24:40.773097 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4496c326dd4d9a3ff2e3a885ab411816b9ff5078f1f9fa33fcf51557b7fe96ec"
	I0819 20:24:40.818995 1012241 logs.go:123] Gathering logs for kube-proxy [0e164c1098e699c8334f713a53dccc6fb785c5a533691496feb7bfbb3bc3fc57] ...
	I0819 20:24:40.819028 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e164c1098e699c8334f713a53dccc6fb785c5a533691496feb7bfbb3bc3fc57"
	I0819 20:24:40.862651 1012241 logs.go:123] Gathering logs for kube-controller-manager [17ec9f70f07aae6962f91d85b38bb77039cd2b084aa3c8faee6f57d6a8c3f529] ...
	I0819 20:24:40.862683 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17ec9f70f07aae6962f91d85b38bb77039cd2b084aa3c8faee6f57d6a8c3f529"
	I0819 20:24:40.959927 1012241 logs.go:123] Gathering logs for container status ...
	I0819 20:24:40.959973 1012241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:24:41.009965 1012241 out.go:358] Setting ErrFile to fd 2...
	I0819 20:24:41.009995 1012241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 20:24:41.010086 1012241 out.go:270] X Problems detected in kubelet:
	W0819 20:24:41.010114 1012241 out.go:270]   Aug 19 20:23:05 addons-199708 kubelet[1507]: W0819 20:23:05.995162    1507 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-199708" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-199708' and this object
	W0819 20:24:41.010130 1012241 out.go:270]   Aug 19 20:23:05 addons-199708 kubelet[1507]: E0819 20:23:05.995217    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-199708\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-199708' and this object" logger="UnhandledError"
	W0819 20:24:41.010148 1012241 out.go:270]   Aug 19 20:23:05 addons-199708 kubelet[1507]: W0819 20:23:05.995389    1507 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-199708" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-199708' and this object
	W0819 20:24:41.010162 1012241 out.go:270]   Aug 19 20:23:05 addons-199708 kubelet[1507]: E0819 20:23:05.995412    1507 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-199708\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-199708' and this object" logger="UnhandledError"
	I0819 20:24:41.010169 1012241 out.go:358] Setting ErrFile to fd 2...
	I0819 20:24:41.010180 1012241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:24:51.024938 1012241 system_pods.go:59] 18 kube-system pods found
	I0819 20:24:51.024987 1012241 system_pods.go:61] "coredns-6f6b679f8f-6n4mb" [c3402fe2-9566-4f90-a512-9f614a55dece] Running
	I0819 20:24:51.024994 1012241 system_pods.go:61] "csi-hostpath-attacher-0" [9c27b821-4447-46b9-b1ad-2aa93595632b] Running
	I0819 20:24:51.024999 1012241 system_pods.go:61] "csi-hostpath-resizer-0" [935e08cf-2eb9-45aa-88d7-22e89cc8528c] Running
	I0819 20:24:51.025003 1012241 system_pods.go:61] "csi-hostpathplugin-mp2fj" [c6450a00-7d90-4f5f-ac88-97e1805effe3] Running
	I0819 20:24:51.025007 1012241 system_pods.go:61] "etcd-addons-199708" [f3b7d38f-e384-4ac0-a896-f06a60a5b650] Running
	I0819 20:24:51.025012 1012241 system_pods.go:61] "kindnet-frmsm" [293a5e8d-a8b5-470d-a110-bde48e311ad7] Running
	I0819 20:24:51.025016 1012241 system_pods.go:61] "kube-apiserver-addons-199708" [7eef55b1-1f3d-4d7d-a66f-a2b96d167158] Running
	I0819 20:24:51.025020 1012241 system_pods.go:61] "kube-controller-manager-addons-199708" [6299ec89-5e0a-4fbc-a136-274a9f0ad339] Running
	I0819 20:24:51.025026 1012241 system_pods.go:61] "kube-ingress-dns-minikube" [18bbf659-adcd-4f3c-8a24-47c9af3dcf74] Running
	I0819 20:24:51.025032 1012241 system_pods.go:61] "kube-proxy-99r72" [36b5b22d-de71-471c-9b87-896b105a27cc] Running
	I0819 20:24:51.025036 1012241 system_pods.go:61] "kube-scheduler-addons-199708" [6cc1f06d-47de-41c0-9c60-df3cb6229707] Running
	I0819 20:24:51.025040 1012241 system_pods.go:61] "metrics-server-8988944d9-phnbr" [9ff0d452-fc9c-4259-bc8e-032f3ad5350a] Running
	I0819 20:24:51.025045 1012241 system_pods.go:61] "nvidia-device-plugin-daemonset-6p75r" [03198291-96ab-4c9c-8393-70aa68bb887b] Running
	I0819 20:24:51.025049 1012241 system_pods.go:61] "registry-6fb4cdfc84-2d8zw" [571a9575-3986-40cc-80d1-071415cf3a04] Running
	I0819 20:24:51.025053 1012241 system_pods.go:61] "registry-proxy-mtrlv" [fe09b5f8-66ed-4907-8d46-d177a6e3922f] Running
	I0819 20:24:51.025057 1012241 system_pods.go:61] "snapshot-controller-56fcc65765-65dzc" [1f0b80b1-656d-4d0a-8e51-84aeeee65b66] Running
	I0819 20:24:51.025062 1012241 system_pods.go:61] "snapshot-controller-56fcc65765-t9q62" [f4909d7e-03a0-4e63-b3e8-7addc77d9b4b] Running
	I0819 20:24:51.025067 1012241 system_pods.go:61] "storage-provisioner" [3e5f85cd-821b-4050-823b-b31a35b1d14a] Running
	I0819 20:24:51.025074 1012241 system_pods.go:74] duration metric: took 11.470808338s to wait for pod list to return data ...
	I0819 20:24:51.025119 1012241 default_sa.go:34] waiting for default service account to be created ...
	I0819 20:24:51.028830 1012241 default_sa.go:45] found service account: "default"
	I0819 20:24:51.028863 1012241 default_sa.go:55] duration metric: took 3.727656ms for default service account to be created ...
	I0819 20:24:51.028874 1012241 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 20:24:51.040327 1012241 system_pods.go:86] 18 kube-system pods found
	I0819 20:24:51.040374 1012241 system_pods.go:89] "coredns-6f6b679f8f-6n4mb" [c3402fe2-9566-4f90-a512-9f614a55dece] Running
	I0819 20:24:51.040385 1012241 system_pods.go:89] "csi-hostpath-attacher-0" [9c27b821-4447-46b9-b1ad-2aa93595632b] Running
	I0819 20:24:51.040391 1012241 system_pods.go:89] "csi-hostpath-resizer-0" [935e08cf-2eb9-45aa-88d7-22e89cc8528c] Running
	I0819 20:24:51.040396 1012241 system_pods.go:89] "csi-hostpathplugin-mp2fj" [c6450a00-7d90-4f5f-ac88-97e1805effe3] Running
	I0819 20:24:51.040402 1012241 system_pods.go:89] "etcd-addons-199708" [f3b7d38f-e384-4ac0-a896-f06a60a5b650] Running
	I0819 20:24:51.040407 1012241 system_pods.go:89] "kindnet-frmsm" [293a5e8d-a8b5-470d-a110-bde48e311ad7] Running
	I0819 20:24:51.040412 1012241 system_pods.go:89] "kube-apiserver-addons-199708" [7eef55b1-1f3d-4d7d-a66f-a2b96d167158] Running
	I0819 20:24:51.040418 1012241 system_pods.go:89] "kube-controller-manager-addons-199708" [6299ec89-5e0a-4fbc-a136-274a9f0ad339] Running
	I0819 20:24:51.040424 1012241 system_pods.go:89] "kube-ingress-dns-minikube" [18bbf659-adcd-4f3c-8a24-47c9af3dcf74] Running
	I0819 20:24:51.040431 1012241 system_pods.go:89] "kube-proxy-99r72" [36b5b22d-de71-471c-9b87-896b105a27cc] Running
	I0819 20:24:51.040436 1012241 system_pods.go:89] "kube-scheduler-addons-199708" [6cc1f06d-47de-41c0-9c60-df3cb6229707] Running
	I0819 20:24:51.040441 1012241 system_pods.go:89] "metrics-server-8988944d9-phnbr" [9ff0d452-fc9c-4259-bc8e-032f3ad5350a] Running
	I0819 20:24:51.040450 1012241 system_pods.go:89] "nvidia-device-plugin-daemonset-6p75r" [03198291-96ab-4c9c-8393-70aa68bb887b] Running
	I0819 20:24:51.040455 1012241 system_pods.go:89] "registry-6fb4cdfc84-2d8zw" [571a9575-3986-40cc-80d1-071415cf3a04] Running
	I0819 20:24:51.040459 1012241 system_pods.go:89] "registry-proxy-mtrlv" [fe09b5f8-66ed-4907-8d46-d177a6e3922f] Running
	I0819 20:24:51.040464 1012241 system_pods.go:89] "snapshot-controller-56fcc65765-65dzc" [1f0b80b1-656d-4d0a-8e51-84aeeee65b66] Running
	I0819 20:24:51.040473 1012241 system_pods.go:89] "snapshot-controller-56fcc65765-t9q62" [f4909d7e-03a0-4e63-b3e8-7addc77d9b4b] Running
	I0819 20:24:51.040477 1012241 system_pods.go:89] "storage-provisioner" [3e5f85cd-821b-4050-823b-b31a35b1d14a] Running
	I0819 20:24:51.040485 1012241 system_pods.go:126] duration metric: took 11.605134ms to wait for k8s-apps to be running ...
	I0819 20:24:51.040498 1012241 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 20:24:51.040564 1012241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 20:24:51.058658 1012241 system_svc.go:56] duration metric: took 18.150789ms WaitForService to wait for kubelet
	I0819 20:24:51.058692 1012241 kubeadm.go:582] duration metric: took 2m31.447294246s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 20:24:51.058737 1012241 node_conditions.go:102] verifying NodePressure condition ...
	I0819 20:24:51.063276 1012241 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0819 20:24:51.063314 1012241 node_conditions.go:123] node cpu capacity is 2
	I0819 20:24:51.063328 1012241 node_conditions.go:105] duration metric: took 4.579367ms to run NodePressure ...
	I0819 20:24:51.063342 1012241 start.go:241] waiting for startup goroutines ...
	I0819 20:24:51.063372 1012241 start.go:246] waiting for cluster config update ...
	I0819 20:24:51.063395 1012241 start.go:255] writing updated cluster config ...
	I0819 20:24:51.063746 1012241 ssh_runner.go:195] Run: rm -f paused
	I0819 20:24:51.420395 1012241 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 20:24:51.422289 1012241 out.go:177] * Done! kubectl is now configured to use "addons-199708" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.423181916Z" level=info msg="Removed pod sandbox: 7825604dd8d55b555307db98057ed34fc79224a4b2020cf8d3bb5bdcb482dd02" id=9a23b742-624c-4674-9a02-d6b3446b57e8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.589652070Z" level=warning msg="Stopping container 7bcc772078f217c5bb453128e5317de7d0cd183fff3a41cdfe5d4909643a01d8 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=3f6fd735-47e0-457b-bf27-4fd2221d2657 name=/runtime.v1.RuntimeService/StopContainer
	Aug 19 20:29:15 addons-199708 conmon[4630]: conmon 7bcc772078f217c5bb45 <ninfo>: container 4641 exited with status 137
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.743757438Z" level=info msg="Stopped container 7bcc772078f217c5bb453128e5317de7d0cd183fff3a41cdfe5d4909643a01d8: ingress-nginx/ingress-nginx-controller-bc57996ff-dl6tk/controller" id=3f6fd735-47e0-457b-bf27-4fd2221d2657 name=/runtime.v1.RuntimeService/StopContainer
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.744794813Z" level=info msg="Stopping pod sandbox: e43adfdd85c9953495a958c775bd99c52de797977d3c2988600259c843369e75" id=9f395f7e-8055-444c-9f15-4f50827af1fb name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.748149021Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-2QXCLE4DPIYKBF56 - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-N32A6SPGIXOX2FUR - [0:0]\n-X KUBE-HP-2QXCLE4DPIYKBF56\n-X KUBE-HP-N32A6SPGIXOX2FUR\nCOMMIT\n"
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.749494226Z" level=info msg="Closing host port tcp:80"
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.749546911Z" level=info msg="Closing host port tcp:443"
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.750954212Z" level=info msg="Host port tcp:80 does not have an open socket"
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.750983397Z" level=info msg="Host port tcp:443 does not have an open socket"
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.751146498Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-bc57996ff-dl6tk Namespace:ingress-nginx ID:e43adfdd85c9953495a958c775bd99c52de797977d3c2988600259c843369e75 UID:f4d56a1d-6a1c-4eef-8328-e7af16f27b45 NetNS:/var/run/netns/c71fabdf-1470-434f-b48b-6544e59fa7d0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.751281643Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-bc57996ff-dl6tk from CNI network \"kindnet\" (type=ptp)"
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.775342013Z" level=info msg="Stopped pod sandbox: e43adfdd85c9953495a958c775bd99c52de797977d3c2988600259c843369e75" id=9f395f7e-8055-444c-9f15-4f50827af1fb name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.860579701Z" level=info msg="Removing container: 7bcc772078f217c5bb453128e5317de7d0cd183fff3a41cdfe5d4909643a01d8" id=6e02f5c7-7b6f-4bad-ae31-503e346b68b6 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 20:29:15 addons-199708 crio[969]: time="2024-08-19 20:29:15.877042282Z" level=info msg="Removed container 7bcc772078f217c5bb453128e5317de7d0cd183fff3a41cdfe5d4909643a01d8: ingress-nginx/ingress-nginx-controller-bc57996ff-dl6tk/controller" id=6e02f5c7-7b6f-4bad-ae31-503e346b68b6 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 20:30:15 addons-199708 crio[969]: time="2024-08-19 20:30:15.426455118Z" level=info msg="Stopping pod sandbox: e43adfdd85c9953495a958c775bd99c52de797977d3c2988600259c843369e75" id=68f339bf-281d-4b7d-ba73-3c44be26e002 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 20:30:15 addons-199708 crio[969]: time="2024-08-19 20:30:15.426504570Z" level=info msg="Stopped pod sandbox (already stopped): e43adfdd85c9953495a958c775bd99c52de797977d3c2988600259c843369e75" id=68f339bf-281d-4b7d-ba73-3c44be26e002 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 20:30:15 addons-199708 crio[969]: time="2024-08-19 20:30:15.426864387Z" level=info msg="Removing pod sandbox: e43adfdd85c9953495a958c775bd99c52de797977d3c2988600259c843369e75" id=dd28bf83-1d37-44f2-90e6-686c86093acf name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 20:30:15 addons-199708 crio[969]: time="2024-08-19 20:30:15.435640768Z" level=info msg="Removed pod sandbox: e43adfdd85c9953495a958c775bd99c52de797977d3c2988600259c843369e75" id=dd28bf83-1d37-44f2-90e6-686c86093acf name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 20:31:30 addons-199708 crio[969]: time="2024-08-19 20:31:30.570445353Z" level=info msg="Stopping container: 638d90220b8aa10e80653003f5ead8060b2f0cf5d5783728a0184abd5e7369b4 (timeout: 30s)" id=a8feb80c-45af-4b82-96e8-52e85bb7d16a name=/runtime.v1.RuntimeService/StopContainer
	Aug 19 20:31:31 addons-199708 crio[969]: time="2024-08-19 20:31:31.753327379Z" level=info msg="Stopped container 638d90220b8aa10e80653003f5ead8060b2f0cf5d5783728a0184abd5e7369b4: kube-system/metrics-server-8988944d9-phnbr/metrics-server" id=a8feb80c-45af-4b82-96e8-52e85bb7d16a name=/runtime.v1.RuntimeService/StopContainer
	Aug 19 20:31:31 addons-199708 crio[969]: time="2024-08-19 20:31:31.754444385Z" level=info msg="Stopping pod sandbox: 6cccec059b98a59dbf5a9ff1b4821409b4c9dc5a56a9929d25622c9db44378ff" id=b7718dd2-3611-4cdb-b072-aabc7d651620 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 20:31:31 addons-199708 crio[969]: time="2024-08-19 20:31:31.754677524Z" level=info msg="Got pod network &{Name:metrics-server-8988944d9-phnbr Namespace:kube-system ID:6cccec059b98a59dbf5a9ff1b4821409b4c9dc5a56a9929d25622c9db44378ff UID:9ff0d452-fc9c-4259-bc8e-032f3ad5350a NetNS:/var/run/netns/7a903dff-5441-424f-98bb-4926a731db1f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 19 20:31:31 addons-199708 crio[969]: time="2024-08-19 20:31:31.754825477Z" level=info msg="Deleting pod kube-system_metrics-server-8988944d9-phnbr from CNI network \"kindnet\" (type=ptp)"
	Aug 19 20:31:31 addons-199708 crio[969]: time="2024-08-19 20:31:31.791836791Z" level=info msg="Stopped pod sandbox: 6cccec059b98a59dbf5a9ff1b4821409b4c9dc5a56a9929d25622c9db44378ff" id=b7718dd2-3611-4cdb-b072-aabc7d651620 name=/runtime.v1.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9029ac3c1990f       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   88c13fa84b6fd       hello-world-app-55bf9c44b4-r2xfn
	459b1733976de       docker.io/library/nginx@sha256:ba188f579f7a2638229e326e78c957a185630e303757813ef1ad7aac1b8248b6                         4 minutes ago       Running             nginx                     0                   d9862f0d07964       nginx
	db3f6e8bef454       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   145db0b5e9dd9       busybox
	638d90220b8aa       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70   8 minutes ago       Exited              metrics-server            0                   6cccec059b98a       metrics-server-8988944d9-phnbr
	4496c326dd4d9       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                        8 minutes ago       Running             coredns                   0                   ca321f2053e83       coredns-6f6b679f8f-6n4mb
	572c16f172949       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        8 minutes ago       Running             storage-provisioner       0                   aa638c4fe2aa1       storage-provisioner
	6bdf6081a42b6       docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64                      9 minutes ago       Running             kindnet-cni               0                   ace2136025d5c       kindnet-frmsm
	0e164c1098e69       71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89                                                        9 minutes ago       Running             kube-proxy                0                   9676de5532aa5       kube-proxy-99r72
	c5ee1a4b65685       cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388                                                        9 minutes ago       Running             kube-apiserver            0                   f4881709540e9       kube-apiserver-addons-199708
	17ec9f70f07aa       fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd                                                        9 minutes ago       Running             kube-controller-manager   0                   292f72858b202       kube-controller-manager-addons-199708
	7f089b595eb71       fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb                                                        9 minutes ago       Running             kube-scheduler            0                   1f54969b6168f       kube-scheduler-addons-199708
	926dc4caa041d       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                        9 minutes ago       Running             etcd                      0                   73d962ba063cb       etcd-addons-199708
	
	
	==> coredns [4496c326dd4d9a3ff2e3a885ab411816b9ff5078f1f9fa33fcf51557b7fe96ec] <==
	[INFO] 10.244.0.13:33588 - 5635 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002565193s
	[INFO] 10.244.0.13:59518 - 62271 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000146879s
	[INFO] 10.244.0.13:59518 - 45882 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000086334s
	[INFO] 10.244.0.13:56355 - 46575 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000131527s
	[INFO] 10.244.0.13:56355 - 10491 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000069571s
	[INFO] 10.244.0.13:51837 - 20507 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000073722s
	[INFO] 10.244.0.13:51837 - 16670 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000076127s
	[INFO] 10.244.0.13:43970 - 33709 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000052365s
	[INFO] 10.244.0.13:43970 - 16040 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000077817s
	[INFO] 10.244.0.13:51757 - 50128 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001669848s
	[INFO] 10.244.0.13:51757 - 42198 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001909272s
	[INFO] 10.244.0.13:58192 - 37642 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000071081s
	[INFO] 10.244.0.13:58192 - 4360 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000079138s
	[INFO] 10.244.0.20:36176 - 15129 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000252765s
	[INFO] 10.244.0.20:53341 - 46364 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000269627s
	[INFO] 10.244.0.20:42761 - 60138 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000268421s
	[INFO] 10.244.0.20:33836 - 2583 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000201705s
	[INFO] 10.244.0.20:59384 - 20410 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000166456s
	[INFO] 10.244.0.20:34924 - 47363 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000247867s
	[INFO] 10.244.0.20:49788 - 22871 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002725972s
	[INFO] 10.244.0.20:38962 - 38903 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003146753s
	[INFO] 10.244.0.20:38215 - 64776 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001079066s
	[INFO] 10.244.0.20:60615 - 49563 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00137435s
	[INFO] 10.244.0.23:45190 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000221341s
	[INFO] 10.244.0.23:60611 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000161976s
	
	
	==> describe nodes <==
	Name:               addons-199708
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-199708
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7253360125032c7e2214e25ff4b5c894ae5844e8
	                    minikube.k8s.io/name=addons-199708
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T20_22_15_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-199708
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 20:22:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-199708
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 20:31:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 20:29:25 +0000   Mon, 19 Aug 2024 20:22:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 20:29:25 +0000   Mon, 19 Aug 2024 20:22:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 20:29:25 +0000   Mon, 19 Aug 2024 20:22:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 20:29:25 +0000   Mon, 19 Aug 2024 20:23:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-199708
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	System Info:
	  Machine ID:                 62b70ead3e954443b0f62bd9077737ad
	  System UUID:                ede58689-a7ef-4dc9-a622-03d05ef9b23c
	  Boot ID:                    6e682a37-9512-4f3a-882d-7e45a79a9483
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	  default                     hello-world-app-55bf9c44b4-r2xfn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 coredns-6f6b679f8f-6n4mb                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     9m12s
	  kube-system                 etcd-addons-199708                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9m18s
	  kube-system                 kindnet-frmsm                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m13s
	  kube-system                 kube-apiserver-addons-199708             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-controller-manager-addons-199708    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-99r72                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-scheduler-addons-199708             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 9m6s   kube-proxy       
	  Normal   Starting                 9m18s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m18s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  9m17s  kubelet          Node addons-199708 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m17s  kubelet          Node addons-199708 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m17s  kubelet          Node addons-199708 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m14s  node-controller  Node addons-199708 event: Registered Node addons-199708 in Controller
	  Normal   NodeReady                8m27s  kubelet          Node addons-199708 status is now: NodeReady
	
	
	==> dmesg <==
	
	
	==> etcd [926dc4caa041d397c3880a4325d8f356a972cfccd2a77902392d470e8a12ffc1] <==
	{"level":"info","ts":"2024-08-19T20:22:09.246144Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T20:22:09.257804Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T20:22:09.257847Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T20:22:23.595972Z","caller":"traceutil/trace.go:171","msg":"trace[1942552739] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"124.890398ms","start":"2024-08-19T20:22:23.471061Z","end":"2024-08-19T20:22:23.595951Z","steps":["trace[1942552739] 'process raft request'  (duration: 124.795867ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T20:22:24.284709Z","caller":"traceutil/trace.go:171","msg":"trace[597586118] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"182.545127ms","start":"2024-08-19T20:22:24.102019Z","end":"2024-08-19T20:22:24.284564Z","steps":["trace[597586118] 'process raft request'  (duration: 83.917065ms)","trace[597586118] 'compare'  (duration: 83.289589ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T20:22:24.299677Z","caller":"traceutil/trace.go:171","msg":"trace[1735036322] transaction","detail":"{read_only:false; response_revision:370; number_of_response:1; }","duration":"174.940037ms","start":"2024-08-19T20:22:24.124717Z","end":"2024-08-19T20:22:24.299657Z","steps":["trace[1735036322] 'process raft request'  (duration: 144.677744ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T20:22:24.301009Z","caller":"traceutil/trace.go:171","msg":"trace[383884208] transaction","detail":"{read_only:false; response_revision:371; number_of_response:1; }","duration":"176.07055ms","start":"2024-08-19T20:22:24.124910Z","end":"2024-08-19T20:22:24.300981Z","steps":["trace[383884208] 'process raft request'  (duration: 144.532638ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T20:22:24.301218Z","caller":"traceutil/trace.go:171","msg":"trace[824901047] linearizableReadLoop","detail":"{readStateIndex:380; appliedIndex:375; }","duration":"175.888372ms","start":"2024-08-19T20:22:24.125305Z","end":"2024-08-19T20:22:24.301193Z","steps":["trace[824901047] 'read index received'  (duration: 9.409254ms)","trace[824901047] 'applied index is now lower than readState.Index'  (duration: 166.477059ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T20:22:24.301730Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.360886ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T20:22:24.309405Z","caller":"traceutil/trace.go:171","msg":"trace[1291458707] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:374; }","duration":"184.039025ms","start":"2024-08-19T20:22:24.125344Z","end":"2024-08-19T20:22:24.309383Z","steps":["trace[1291458707] 'agreement among raft nodes before linearized reading'  (duration: 176.320477ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T20:22:24.310378Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.247796ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/coredns-6f6b679f8f\" ","response":"range_response_count:1 size:3797"}
	{"level":"info","ts":"2024-08-19T20:22:24.310483Z","caller":"traceutil/trace.go:171","msg":"trace[1568353333] range","detail":"{range_begin:/registry/replicasets/kube-system/coredns-6f6b679f8f; range_end:; response_count:1; response_revision:374; }","duration":"122.362306ms","start":"2024-08-19T20:22:24.188097Z","end":"2024-08-19T20:22:24.310459Z","steps":["trace[1568353333] 'agreement among raft nodes before linearized reading'  (duration: 122.220982ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T20:22:24.310663Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.13259ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T20:22:24.310720Z","caller":"traceutil/trace.go:171","msg":"trace[910454742] range","detail":"{range_begin:/registry/clusterroles/minikube-ingress-dns; range_end:; response_count:0; response_revision:374; }","duration":"185.190649ms","start":"2024-08-19T20:22:24.125520Z","end":"2024-08-19T20:22:24.310710Z","steps":["trace[910454742] 'agreement among raft nodes before linearized reading'  (duration: 185.114489ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T20:22:24.314737Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.322307ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-08-19T20:22:24.314841Z","caller":"traceutil/trace.go:171","msg":"trace[244805817] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:374; }","duration":"189.436432ms","start":"2024-08-19T20:22:24.125394Z","end":"2024-08-19T20:22:24.314830Z","steps":["trace[244805817] 'agreement among raft nodes before linearized reading'  (duration: 189.284302ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T20:22:24.315039Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.659085ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-08-19T20:22:24.315096Z","caller":"traceutil/trace.go:171","msg":"trace[1764991350] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:374; }","duration":"189.71702ms","start":"2024-08-19T20:22:24.125372Z","end":"2024-08-19T20:22:24.315089Z","steps":["trace[1764991350] 'agreement among raft nodes before linearized reading'  (duration: 189.64013ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T20:22:24.331941Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.054584ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:420"}
	{"level":"info","ts":"2024-08-19T20:22:24.332567Z","caller":"traceutil/trace.go:171","msg":"trace[1607455807] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:374; }","duration":"207.249162ms","start":"2024-08-19T20:22:24.125300Z","end":"2024-08-19T20:22:24.332549Z","steps":["trace[1607455807] 'agreement among raft nodes before linearized reading'  (duration: 205.062919ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T20:22:24.833660Z","caller":"traceutil/trace.go:171","msg":"trace[97244391] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"118.594486ms","start":"2024-08-19T20:22:24.715038Z","end":"2024-08-19T20:22:24.833632Z","steps":["trace[97244391] 'process raft request'  (duration: 23.061091ms)","trace[97244391] 'compare'  (duration: 94.885152ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T20:22:24.833896Z","caller":"traceutil/trace.go:171","msg":"trace[2075142307] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"118.810337ms","start":"2024-08-19T20:22:24.715077Z","end":"2024-08-19T20:22:24.833887Z","steps":["trace[2075142307] 'process raft request'  (duration: 118.015962ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T20:23:26.514570Z","caller":"traceutil/trace.go:171","msg":"trace[1521974339] transaction","detail":"{read_only:false; response_revision:964; number_of_response:1; }","duration":"104.149645ms","start":"2024-08-19T20:23:26.410403Z","end":"2024-08-19T20:23:26.514553Z","steps":["trace[1521974339] 'process raft request'  (duration: 32.518792ms)","trace[1521974339] 'compare'  (duration: 71.547891ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T20:23:26.521828Z","caller":"traceutil/trace.go:171","msg":"trace[1750390916] transaction","detail":"{read_only:false; response_revision:965; number_of_response:1; }","duration":"111.124673ms","start":"2024-08-19T20:23:26.410655Z","end":"2024-08-19T20:23:26.521780Z","steps":["trace[1750390916] 'process raft request'  (duration: 110.665705ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T20:23:26.522113Z","caller":"traceutil/trace.go:171","msg":"trace[1651835865] transaction","detail":"{read_only:false; response_revision:966; number_of_response:1; }","duration":"111.245426ms","start":"2024-08-19T20:23:26.410859Z","end":"2024-08-19T20:23:26.522105Z","steps":["trace[1651835865] 'process raft request'  (duration: 110.562994ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:31:32 up  4:13,  0 users,  load average: 0.59, 0.88, 1.85
	Linux addons-199708 5.15.0-1067-aws #73~20.04.1-Ubuntu SMP Wed Jul 24 17:31:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [6bdf6081a42b6a1f5ee894cd0d45bf4d184f481b1cd7cbd6cc01a0e3700332de] <==
	I0819 20:30:15.755395       1 main.go:299] handling current node
	I0819 20:30:25.754844       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 20:30:25.754880       1 main.go:299] handling current node
	W0819 20:30:26.657127       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 20:30:26.657250       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0819 20:30:35.754813       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 20:30:35.754849       1 main.go:299] handling current node
	W0819 20:30:39.804136       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 20:30:39.804171       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0819 20:30:45.755498       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 20:30:45.755648       1 main.go:299] handling current node
	W0819 20:30:53.317698       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 20:30:53.317735       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 20:30:55.754914       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 20:30:55.754952       1 main.go:299] handling current node
	I0819 20:31:05.755277       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 20:31:05.755313       1 main.go:299] handling current node
	I0819 20:31:15.755396       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 20:31:15.755432       1 main.go:299] handling current node
	W0819 20:31:20.755216       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 20:31:20.755257       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0819 20:31:21.382162       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 20:31:21.382197       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0819 20:31:25.755133       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 20:31:25.755169       1 main.go:299] handling current node
	
	
	==> kube-apiserver [c5ee1a4b656858706d16e09f2577f1ceb0f47aabd974faa222453c787c1b7bd8] <==
	E0819 20:25:00.718417       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56708: use of closed network connection
	E0819 20:25:00.869386       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56724: use of closed network connection
	E0819 20:25:24.215473       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I0819 20:25:25.909932       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0819 20:25:28.218427       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I0819 20:25:47.243570       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 20:25:47.243708       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 20:25:47.268157       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 20:25:47.268289       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 20:25:47.300539       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 20:25:47.300602       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 20:25:47.319762       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 20:25:47.319807       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 20:25:47.356279       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 20:25:47.356923       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0819 20:25:48.319896       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0819 20:25:48.356627       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0819 20:25:48.367306       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	I0819 20:25:55.092159       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.86.69"}
	E0819 20:26:08.422200       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0819 20:26:42.627890       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0819 20:26:43.686416       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0819 20:26:48.204612       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0819 20:26:48.520054       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.183.38"}
	I0819 20:29:10.219063       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.57.173"}
	
	
	==> kube-controller-manager [17ec9f70f07aae6962f91d85b38bb77039cd2b084aa3c8faee6f57d6a8c3f529] <==
	W0819 20:29:18.612845       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 20:29:18.612889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 20:29:20.043993       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 20:29:20.044060       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0819 20:29:22.648435       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	I0819 20:29:25.092269       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-199708"
	W0819 20:29:37.092682       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 20:29:37.092806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 20:30:05.435058       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 20:30:05.435104       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 20:30:08.146439       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 20:30:08.146491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 20:30:12.395031       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 20:30:12.395119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 20:30:32.030734       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 20:30:32.030889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 20:30:44.726140       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 20:30:44.726189       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 20:31:00.935460       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 20:31:00.935515       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 20:31:03.744164       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 20:31:03.744210       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 20:31:04.097271       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 20:31:04.097325       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0819 20:31:30.537724       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-8988944d9" duration="8.616µs"
	
	
	==> kube-proxy [0e164c1098e699c8334f713a53dccc6fb785c5a533691496feb7bfbb3bc3fc57] <==
	I0819 20:22:23.616929       1 server_linux.go:66] "Using iptables proxy"
	I0819 20:22:25.762157       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0819 20:22:25.762238       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 20:22:26.012653       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0819 20:22:26.012815       1 server_linux.go:169] "Using iptables Proxier"
	I0819 20:22:26.015461       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 20:22:26.016145       1 server.go:483] "Version info" version="v1.31.0"
	I0819 20:22:26.016222       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 20:22:26.027984       1 config.go:197] "Starting service config controller"
	I0819 20:22:26.028082       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 20:22:26.028135       1 config.go:104] "Starting endpoint slice config controller"
	I0819 20:22:26.028165       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 20:22:26.028700       1 config.go:326] "Starting node config controller"
	I0819 20:22:26.028759       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 20:22:26.134197       1 shared_informer.go:320] Caches are synced for node config
	I0819 20:22:26.134362       1 shared_informer.go:320] Caches are synced for service config
	I0819 20:22:26.134433       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7f089b595eb71f4f444cfba1715195b143c2da503401429047e8f0059ded8ce0] <==
	W0819 20:22:13.217242       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 20:22:13.217260       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:13.217324       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 20:22:13.217340       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:13.217407       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 20:22:13.217425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:13.217485       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 20:22:13.217501       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:13.217540       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 20:22:13.217616       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:13.217679       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 20:22:13.217696       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:13.217735       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 20:22:13.217750       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:13.217793       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 20:22:13.217821       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:13.217864       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 20:22:13.217879       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:13.217921       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 20:22:13.217937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:13.217980       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 20:22:13.217996       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 20:22:13.218046       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 20:22:13.218064       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0819 20:22:14.805078       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 20:30:25 addons-199708 kubelet[1507]: E0819 20:30:25.189638    1507 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724099425189318480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 20:30:25 addons-199708 kubelet[1507]: E0819 20:30:25.189684    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724099425189318480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 20:30:35 addons-199708 kubelet[1507]: E0819 20:30:35.192018    1507 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724099435191775835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 20:30:35 addons-199708 kubelet[1507]: E0819 20:30:35.192058    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724099435191775835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 20:30:42 addons-199708 kubelet[1507]: I0819 20:30:42.875179    1507 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 19 20:30:45 addons-199708 kubelet[1507]: E0819 20:30:45.195615    1507 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724099445195281622,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 20:30:45 addons-199708 kubelet[1507]: E0819 20:30:45.195669    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724099445195281622,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 20:30:55 addons-199708 kubelet[1507]: E0819 20:30:55.198629    1507 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724099455198327471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 20:30:55 addons-199708 kubelet[1507]: E0819 20:30:55.198666    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724099455198327471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 20:31:05 addons-199708 kubelet[1507]: E0819 20:31:05.201546    1507 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724099465201265118,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 20:31:05 addons-199708 kubelet[1507]: E0819 20:31:05.201585    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724099465201265118,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 20:31:15 addons-199708 kubelet[1507]: E0819 20:31:15.204396    1507 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724099475204076359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 20:31:15 addons-199708 kubelet[1507]: E0819 20:31:15.204440    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724099475204076359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 20:31:25 addons-199708 kubelet[1507]: E0819 20:31:25.208014    1507 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724099485207727794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 20:31:25 addons-199708 kubelet[1507]: E0819 20:31:25.208057    1507 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724099485207727794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597209,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 20:31:31 addons-199708 kubelet[1507]: I0819 20:31:31.889335    1507 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9ff0d452-fc9c-4259-bc8e-032f3ad5350a-tmp-dir\") pod \"9ff0d452-fc9c-4259-bc8e-032f3ad5350a\" (UID: \"9ff0d452-fc9c-4259-bc8e-032f3ad5350a\") "
	Aug 19 20:31:31 addons-199708 kubelet[1507]: I0819 20:31:31.889403    1507 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kksx6\" (UniqueName: \"kubernetes.io/projected/9ff0d452-fc9c-4259-bc8e-032f3ad5350a-kube-api-access-kksx6\") pod \"9ff0d452-fc9c-4259-bc8e-032f3ad5350a\" (UID: \"9ff0d452-fc9c-4259-bc8e-032f3ad5350a\") "
	Aug 19 20:31:31 addons-199708 kubelet[1507]: I0819 20:31:31.890082    1507 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ff0d452-fc9c-4259-bc8e-032f3ad5350a-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "9ff0d452-fc9c-4259-bc8e-032f3ad5350a" (UID: "9ff0d452-fc9c-4259-bc8e-032f3ad5350a"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 19 20:31:31 addons-199708 kubelet[1507]: I0819 20:31:31.895589    1507 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ff0d452-fc9c-4259-bc8e-032f3ad5350a-kube-api-access-kksx6" (OuterVolumeSpecName: "kube-api-access-kksx6") pod "9ff0d452-fc9c-4259-bc8e-032f3ad5350a" (UID: "9ff0d452-fc9c-4259-bc8e-032f3ad5350a"). InnerVolumeSpecName "kube-api-access-kksx6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 19 20:31:31 addons-199708 kubelet[1507]: I0819 20:31:31.989847    1507 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9ff0d452-fc9c-4259-bc8e-032f3ad5350a-tmp-dir\") on node \"addons-199708\" DevicePath \"\""
	Aug 19 20:31:31 addons-199708 kubelet[1507]: I0819 20:31:31.989896    1507 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-kksx6\" (UniqueName: \"kubernetes.io/projected/9ff0d452-fc9c-4259-bc8e-032f3ad5350a-kube-api-access-kksx6\") on node \"addons-199708\" DevicePath \"\""
	Aug 19 20:31:32 addons-199708 kubelet[1507]: I0819 20:31:32.134949    1507 scope.go:117] "RemoveContainer" containerID="638d90220b8aa10e80653003f5ead8060b2f0cf5d5783728a0184abd5e7369b4"
	Aug 19 20:31:32 addons-199708 kubelet[1507]: I0819 20:31:32.167748    1507 scope.go:117] "RemoveContainer" containerID="638d90220b8aa10e80653003f5ead8060b2f0cf5d5783728a0184abd5e7369b4"
	Aug 19 20:31:32 addons-199708 kubelet[1507]: E0819 20:31:32.168451    1507 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"638d90220b8aa10e80653003f5ead8060b2f0cf5d5783728a0184abd5e7369b4\": container with ID starting with 638d90220b8aa10e80653003f5ead8060b2f0cf5d5783728a0184abd5e7369b4 not found: ID does not exist" containerID="638d90220b8aa10e80653003f5ead8060b2f0cf5d5783728a0184abd5e7369b4"
	Aug 19 20:31:32 addons-199708 kubelet[1507]: I0819 20:31:32.168486    1507 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"638d90220b8aa10e80653003f5ead8060b2f0cf5d5783728a0184abd5e7369b4"} err="failed to get container status \"638d90220b8aa10e80653003f5ead8060b2f0cf5d5783728a0184abd5e7369b4\": rpc error: code = NotFound desc = could not find container \"638d90220b8aa10e80653003f5ead8060b2f0cf5d5783728a0184abd5e7369b4\": container with ID starting with 638d90220b8aa10e80653003f5ead8060b2f0cf5d5783728a0184abd5e7369b4 not found: ID does not exist"
	
	
	==> storage-provisioner [572c16f1729497f7d94a754227c1c93424bdd957aa01d16528e4a906865cb8df] <==
	I0819 20:23:07.068670       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 20:23:07.084179       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 20:23:07.084307       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 20:23:07.093879       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 20:23:07.094187       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-199708_2f18a78e-7a53-487e-ad83-a82b90cd4069!
	I0819 20:23:07.094289       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f018fd16-44f4-43d0-9569-d72317f64d49", APIVersion:"v1", ResourceVersion:"903", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-199708_2f18a78e-7a53-487e-ad83-a82b90cd4069 became leader
	I0819 20:23:07.197770       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-199708_2f18a78e-7a53-487e-ad83-a82b90cd4069!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-199708 -n addons-199708
helpers_test.go:261: (dbg) Run:  kubectl --context addons-199708 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (321.20s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (137.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-876838 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0819 20:45:21.255662 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/functional-915934/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-876838 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m12.412405675s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:589: expected 3 nodes to be Ready, got 
-- stdout --
	NAME            STATUS     ROLES           AGE     VERSION
	ha-876838       NotReady   control-plane   11m     v1.31.0
	ha-876838-m02   Ready      control-plane   10m     v1.31.0
	ha-876838-m04   Ready      <none>          8m23s   v1.31.0

                                                
                                                
-- /stdout --
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:597: expected 3 nodes Ready status to be True, got 
-- stdout --
	' Unknown
	 True
	 True
	'

                                                
                                                
-- /stdout --
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-876838
helpers_test.go:235: (dbg) docker inspect ha-876838:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "720c5e391f37748cbcb4912e8ceed34f294ecd901d7b6f0a82f4b0d682eb07d6",
	        "Created": "2024-08-19T20:35:53.890178598Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1071923,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-19T20:45:15.790240241Z",
	            "FinishedAt": "2024-08-19T20:45:14.867184451Z"
	        },
	        "Image": "sha256:decdd59746a9dba10062a73f6cd4b910c7b4e60613660b1022f8357747681c4d",
	        "ResolvConfPath": "/var/lib/docker/containers/720c5e391f37748cbcb4912e8ceed34f294ecd901d7b6f0a82f4b0d682eb07d6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/720c5e391f37748cbcb4912e8ceed34f294ecd901d7b6f0a82f4b0d682eb07d6/hostname",
	        "HostsPath": "/var/lib/docker/containers/720c5e391f37748cbcb4912e8ceed34f294ecd901d7b6f0a82f4b0d682eb07d6/hosts",
	        "LogPath": "/var/lib/docker/containers/720c5e391f37748cbcb4912e8ceed34f294ecd901d7b6f0a82f4b0d682eb07d6/720c5e391f37748cbcb4912e8ceed34f294ecd901d7b6f0a82f4b0d682eb07d6-json.log",
	        "Name": "/ha-876838",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-876838:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-876838",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9bb95d6c379705e919996021290e11eec11d40dddec5db9e6722c8c3c7f1f847-init/diff:/var/lib/docker/overlay2/9477ca3f94c975b8a19e34c7e6e216a8aaa21d9134153e903eb7147c449f54f5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9bb95d6c379705e919996021290e11eec11d40dddec5db9e6722c8c3c7f1f847/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9bb95d6c379705e919996021290e11eec11d40dddec5db9e6722c8c3c7f1f847/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9bb95d6c379705e919996021290e11eec11d40dddec5db9e6722c8c3c7f1f847/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-876838",
	                "Source": "/var/lib/docker/volumes/ha-876838/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-876838",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-876838",
	                "name.minikube.sigs.k8s.io": "ha-876838",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6368553cd2b1b920a13a9d4671d7336b52216bb2c083f4f7dc14b2df3fd8693b",
	            "SandboxKey": "/var/run/docker/netns/6368553cd2b1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33958"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33959"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33962"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33960"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33961"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-876838": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a203cb99320ea6b940a320c9c4519fa32186eb8e1c9d0464c3b4a86200c65fea",
	                    "EndpointID": "5473d44b778d556d8640ba2e48ab08d38a86bc8dfb4793f7d944a8405f612649",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-876838",
	                        "720c5e391f37"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-876838 -n ha-876838
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ha-876838 logs -n 25: (2.112289323s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-876838 cp ha-876838-m03:/home/docker/cp-test.txt                              | ha-876838 | jenkins | v1.33.1 | 19 Aug 24 20:39 UTC | 19 Aug 24 20:39 UTC |
	|         | ha-876838-m04:/home/docker/cp-test_ha-876838-m03_ha-876838-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-876838 ssh -n                                                                 | ha-876838 | jenkins | v1.33.1 | 19 Aug 24 20:39 UTC | 19 Aug 24 20:39 UTC |
	|         | ha-876838-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-876838 ssh -n ha-876838-m04 sudo cat                                          | ha-876838 | jenkins | v1.33.1 | 19 Aug 24 20:39 UTC | 19 Aug 24 20:39 UTC |
	|         | /home/docker/cp-test_ha-876838-m03_ha-876838-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-876838 cp testdata/cp-test.txt                                                | ha-876838 | jenkins | v1.33.1 | 19 Aug 24 20:39 UTC | 19 Aug 24 20:39 UTC |
	|         | ha-876838-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-876838 ssh -n                                                                 | ha-876838 | jenkins | v1.33.1 | 19 Aug 24 20:39 UTC | 19 Aug 24 20:39 UTC |
	|         | ha-876838-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-876838 cp ha-876838-m04:/home/docker/cp-test.txt                              | ha-876838 | jenkins | v1.33.1 | 19 Aug 24 20:39 UTC | 19 Aug 24 20:39 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1216690093/001/cp-test_ha-876838-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-876838 ssh -n                                                                 | ha-876838 | jenkins | v1.33.1 | 19 Aug 24 20:39 UTC | 19 Aug 24 20:39 UTC |
	|         | ha-876838-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-876838 cp ha-876838-m04:/home/docker/cp-test.txt                              | ha-876838 | jenkins | v1.33.1 | 19 Aug 24 20:39 UTC | 19 Aug 24 20:39 UTC |
	|         | ha-876838:/home/docker/cp-test_ha-876838-m04_ha-876838.txt                       |           |         |         |                     |                     |
	| ssh     | ha-876838 ssh -n                                                                 | ha-876838 | jenkins | v1.33.1 | 19 Aug 24 20:39 UTC | 19 Aug 24 20:39 UTC |
	|         | ha-876838-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-876838 ssh -n ha-876838 sudo cat                                              | ha-876838 | jenkins | v1.33.1 | 19 Aug 24 20:39 UTC | 19 Aug 24 20:39 UTC |
	|         | /home/docker/cp-test_ha-876838-m04_ha-876838.txt                                 |           |         |         |                     |                     |
	| cp      | ha-876838 cp ha-876838-m04:/home/docker/cp-test.txt                              | ha-876838 | jenkins | v1.33.1 | 19 Aug 24 20:39 UTC | 19 Aug 24 20:39 UTC |
	|         | ha-876838-m02:/home/docker/cp-test_ha-876838-m04_ha-876838-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-876838 ssh -n                                                                 | ha-876838 | jenkins | v1.33.1 | 19 Aug 24 20:39 UTC | 19 Aug 24 20:39 UTC |
	|         | ha-876838-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-876838 ssh -n ha-876838-m02 sudo cat                                          | ha-876838 | jenkins | v1.33.1 | 19 Aug 24 20:39 UTC | 19 Aug 24 20:39 UTC |
	|         | /home/docker/cp-test_ha-876838-m04_ha-876838-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-876838 cp ha-876838-m04:/home/docker/cp-test.txt                              | ha-876838 | jenkins | v1.33.1 | 19 Aug 24 20:39 UTC | 19 Aug 24 20:39 UTC |
	|         | ha-876838-m03:/home/docker/cp-test_ha-876838-m04_ha-876838-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-876838 ssh -n                                                                 | ha-876838 | jenkins | v1.33.1 | 19 Aug 24 20:39 UTC | 19 Aug 24 20:39 UTC |
	|         | ha-876838-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-876838 ssh -n ha-876838-m03 sudo cat                                          | ha-876838 | jenkins | v1.33.1 | 19 Aug 24 20:39 UTC | 19 Aug 24 20:39 UTC |
	|         | /home/docker/cp-test_ha-876838-m04_ha-876838-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-876838 node stop m02 -v=7                                                     | ha-876838 | jenkins | v1.33.1 | 19 Aug 24 20:39 UTC | 19 Aug 24 20:39 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-876838 node start m02 -v=7                                                    | ha-876838 | jenkins | v1.33.1 | 19 Aug 24 20:40 UTC | 19 Aug 24 20:40 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-876838 -v=7                                                           | ha-876838 | jenkins | v1.33.1 | 19 Aug 24 20:40 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-876838 -v=7                                                                | ha-876838 | jenkins | v1.33.1 | 19 Aug 24 20:40 UTC | 19 Aug 24 20:41 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-876838 --wait=true -v=7                                                    | ha-876838 | jenkins | v1.33.1 | 19 Aug 24 20:41 UTC | 19 Aug 24 20:44 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-876838                                                                | ha-876838 | jenkins | v1.33.1 | 19 Aug 24 20:44 UTC |                     |
	| node    | ha-876838 node delete m03 -v=7                                                   | ha-876838 | jenkins | v1.33.1 | 19 Aug 24 20:44 UTC | 19 Aug 24 20:44 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-876838 stop -v=7                                                              | ha-876838 | jenkins | v1.33.1 | 19 Aug 24 20:44 UTC | 19 Aug 24 20:45 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-876838 --wait=true                                                         | ha-876838 | jenkins | v1.33.1 | 19 Aug 24 20:45 UTC | 19 Aug 24 20:47 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=docker                                                                  |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 20:45:15
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 20:45:15.311716 1071714 out.go:345] Setting OutFile to fd 1 ...
	I0819 20:45:15.311954 1071714 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:45:15.311982 1071714 out.go:358] Setting ErrFile to fd 2...
	I0819 20:45:15.312003 1071714 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:45:15.312382 1071714 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1006087/.minikube/bin
	I0819 20:45:15.312912 1071714 out.go:352] Setting JSON to false
	I0819 20:45:15.313956 1071714 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":16057,"bootTime":1724084259,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0819 20:45:15.314086 1071714 start.go:139] virtualization:  
	I0819 20:45:15.317934 1071714 out.go:177] * [ha-876838] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 20:45:15.321437 1071714 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 20:45:15.321556 1071714 notify.go:220] Checking for updates...
	I0819 20:45:15.326872 1071714 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 20:45:15.329541 1071714 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-1006087/kubeconfig
	I0819 20:45:15.332252 1071714 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1006087/.minikube
	I0819 20:45:15.334768 1071714 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 20:45:15.337515 1071714 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 20:45:15.340708 1071714 config.go:182] Loaded profile config "ha-876838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 20:45:15.341255 1071714 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 20:45:15.366671 1071714 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 20:45:15.366799 1071714 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 20:45:15.421137 1071714 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:41 SystemTime:2024-08-19 20:45:15.411529127 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 20:45:15.421255 1071714 docker.go:307] overlay module found
	I0819 20:45:15.424179 1071714 out.go:177] * Using the docker driver based on existing profile
	I0819 20:45:15.426793 1071714 start.go:297] selected driver: docker
	I0819 20:45:15.426818 1071714 start.go:901] validating driver "docker" against &{Name:ha-876838 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-876838 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 20:45:15.426986 1071714 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 20:45:15.427101 1071714 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 20:45:15.479619 1071714 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:41 SystemTime:2024-08-19 20:45:15.470149921 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 20:45:15.480065 1071714 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 20:45:15.480130 1071714 cni.go:84] Creating CNI manager for ""
	I0819 20:45:15.480144 1071714 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0819 20:45:15.480198 1071714 start.go:340] cluster config:
	{Name:ha-876838 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-876838 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 20:45:15.484799 1071714 out.go:177] * Starting "ha-876838" primary control-plane node in "ha-876838" cluster
	I0819 20:45:15.487431 1071714 cache.go:121] Beginning downloading kic base image for docker with crio
	I0819 20:45:15.490162 1071714 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0819 20:45:15.492803 1071714 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 20:45:15.492865 1071714 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-1006087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	I0819 20:45:15.492875 1071714 cache.go:56] Caching tarball of preloaded images
	I0819 20:45:15.492910 1071714 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 20:45:15.492961 1071714 preload.go:172] Found /home/jenkins/minikube-integration/19423-1006087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0819 20:45:15.492971 1071714 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 20:45:15.493127 1071714 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/config.json ...
	W0819 20:45:15.511743 1071714 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d is of wrong architecture
	I0819 20:45:15.511762 1071714 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 20:45:15.511848 1071714 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 20:45:15.511866 1071714 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0819 20:45:15.511870 1071714 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0819 20:45:15.511878 1071714 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 20:45:15.511884 1071714 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0819 20:45:15.513380 1071714 image.go:273] response: 
	I0819 20:45:15.640551 1071714 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0819 20:45:15.640593 1071714 cache.go:194] Successfully downloaded all kic artifacts
	I0819 20:45:15.640639 1071714 start.go:360] acquireMachinesLock for ha-876838: {Name:mka45a1239c4383dba37f94931d83adf70d31a1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 20:45:15.640715 1071714 start.go:364] duration metric: took 44.816µs to acquireMachinesLock for "ha-876838"
	I0819 20:45:15.640742 1071714 start.go:96] Skipping create...Using existing machine configuration
	I0819 20:45:15.640753 1071714 fix.go:54] fixHost starting: 
	I0819 20:45:15.641050 1071714 cli_runner.go:164] Run: docker container inspect ha-876838 --format={{.State.Status}}
	I0819 20:45:15.657461 1071714 fix.go:112] recreateIfNeeded on ha-876838: state=Stopped err=<nil>
	W0819 20:45:15.657500 1071714 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 20:45:15.660581 1071714 out.go:177] * Restarting existing docker container for "ha-876838" ...
	I0819 20:45:15.663497 1071714 cli_runner.go:164] Run: docker start ha-876838
	I0819 20:45:15.956301 1071714 cli_runner.go:164] Run: docker container inspect ha-876838 --format={{.State.Status}}
	I0819 20:45:15.982556 1071714 kic.go:430] container "ha-876838" state is running.
	I0819 20:45:15.983051 1071714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-876838
	I0819 20:45:16.006941 1071714 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/config.json ...
	I0819 20:45:16.007373 1071714 machine.go:93] provisionDockerMachine start ...
	I0819 20:45:16.007462 1071714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876838
	I0819 20:45:16.028244 1071714 main.go:141] libmachine: Using SSH client type: native
	I0819 20:45:16.028525 1071714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33958 <nil> <nil>}
	I0819 20:45:16.028535 1071714 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 20:45:16.029193 1071714 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36958->127.0.0.1:33958: read: connection reset by peer
	I0819 20:45:19.164985 1071714 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-876838
	
	I0819 20:45:19.165012 1071714 ubuntu.go:169] provisioning hostname "ha-876838"
	I0819 20:45:19.165087 1071714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876838
	I0819 20:45:19.184893 1071714 main.go:141] libmachine: Using SSH client type: native
	I0819 20:45:19.185155 1071714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33958 <nil> <nil>}
	I0819 20:45:19.185171 1071714 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-876838 && echo "ha-876838" | sudo tee /etc/hostname
	I0819 20:45:19.325912 1071714 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-876838
	
	I0819 20:45:19.326066 1071714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876838
	I0819 20:45:19.343169 1071714 main.go:141] libmachine: Using SSH client type: native
	I0819 20:45:19.343433 1071714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33958 <nil> <nil>}
	I0819 20:45:19.343454 1071714 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-876838' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-876838/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-876838' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 20:45:19.473752 1071714 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 20:45:19.473783 1071714 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19423-1006087/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-1006087/.minikube}
	I0819 20:45:19.473821 1071714 ubuntu.go:177] setting up certificates
	I0819 20:45:19.473830 1071714 provision.go:84] configureAuth start
	I0819 20:45:19.473903 1071714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-876838
	I0819 20:45:19.490884 1071714 provision.go:143] copyHostCerts
	I0819 20:45:19.490940 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.pem
	I0819 20:45:19.490980 1071714 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.pem, removing ...
	I0819 20:45:19.490992 1071714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.pem
	I0819 20:45:19.491074 1071714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.pem (1082 bytes)
	I0819 20:45:19.491172 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-1006087/.minikube/cert.pem
	I0819 20:45:19.491203 1071714 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-1006087/.minikube/cert.pem, removing ...
	I0819 20:45:19.491212 1071714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-1006087/.minikube/cert.pem
	I0819 20:45:19.491244 1071714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-1006087/.minikube/cert.pem (1123 bytes)
	I0819 20:45:19.491306 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-1006087/.minikube/key.pem
	I0819 20:45:19.491328 1071714 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-1006087/.minikube/key.pem, removing ...
	I0819 20:45:19.491335 1071714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-1006087/.minikube/key.pem
	I0819 20:45:19.491371 1071714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-1006087/.minikube/key.pem (1675 bytes)
	I0819 20:45:19.491448 1071714 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca-key.pem org=jenkins.ha-876838 san=[127.0.0.1 192.168.49.2 ha-876838 localhost minikube]
	I0819 20:45:19.719783 1071714 provision.go:177] copyRemoteCerts
	I0819 20:45:19.719851 1071714 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 20:45:19.719894 1071714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876838
	I0819 20:45:19.739327 1071714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/ha-876838/id_rsa Username:docker}
	I0819 20:45:19.834550 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 20:45:19.834616 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0819 20:45:19.860358 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 20:45:19.860423 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 20:45:19.885354 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 20:45:19.885417 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 20:45:19.910916 1071714 provision.go:87] duration metric: took 437.066269ms to configureAuth
	I0819 20:45:19.910982 1071714 ubuntu.go:193] setting minikube options for container-runtime
	I0819 20:45:19.911236 1071714 config.go:182] Loaded profile config "ha-876838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 20:45:19.911352 1071714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876838
	I0819 20:45:19.927951 1071714 main.go:141] libmachine: Using SSH client type: native
	I0819 20:45:19.928223 1071714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33958 <nil> <nil>}
	I0819 20:45:19.928243 1071714 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 20:45:20.389762 1071714 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 20:45:20.389832 1071714 machine.go:96] duration metric: took 4.382441417s to provisionDockerMachine
	I0819 20:45:20.389858 1071714 start.go:293] postStartSetup for "ha-876838" (driver="docker")
	I0819 20:45:20.389900 1071714 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 20:45:20.390001 1071714 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 20:45:20.390096 1071714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876838
	I0819 20:45:20.417971 1071714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/ha-876838/id_rsa Username:docker}
	I0819 20:45:20.512469 1071714 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 20:45:20.515758 1071714 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0819 20:45:20.515797 1071714 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0819 20:45:20.515809 1071714 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0819 20:45:20.515817 1071714 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0819 20:45:20.515828 1071714 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-1006087/.minikube/addons for local assets ...
	I0819 20:45:20.515881 1071714 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-1006087/.minikube/files for local assets ...
	I0819 20:45:20.515963 1071714 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-1006087/.minikube/files/etc/ssl/certs/10114622.pem -> 10114622.pem in /etc/ssl/certs
	I0819 20:45:20.515975 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/files/etc/ssl/certs/10114622.pem -> /etc/ssl/certs/10114622.pem
	I0819 20:45:20.516076 1071714 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 20:45:20.524391 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/files/etc/ssl/certs/10114622.pem --> /etc/ssl/certs/10114622.pem (1708 bytes)
	I0819 20:45:20.549672 1071714 start.go:296] duration metric: took 159.783537ms for postStartSetup
	I0819 20:45:20.549752 1071714 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 20:45:20.549811 1071714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876838
	I0819 20:45:20.566651 1071714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/ha-876838/id_rsa Username:docker}
	I0819 20:45:20.654452 1071714 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0819 20:45:20.659272 1071714 fix.go:56] duration metric: took 5.018512115s for fixHost
	I0819 20:45:20.659299 1071714 start.go:83] releasing machines lock for "ha-876838", held for 5.018569575s
	I0819 20:45:20.659370 1071714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-876838
	I0819 20:45:20.676504 1071714 ssh_runner.go:195] Run: cat /version.json
	I0819 20:45:20.676555 1071714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876838
	I0819 20:45:20.676830 1071714 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 20:45:20.676878 1071714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876838
	I0819 20:45:20.699033 1071714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/ha-876838/id_rsa Username:docker}
	I0819 20:45:20.701877 1071714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/ha-876838/id_rsa Username:docker}
	I0819 20:45:20.915813 1071714 ssh_runner.go:195] Run: systemctl --version
	I0819 20:45:20.920325 1071714 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 20:45:21.066167 1071714 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 20:45:21.071951 1071714 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 20:45:21.081816 1071714 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0819 20:45:21.081897 1071714 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 20:45:21.091755 1071714 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 20:45:21.091783 1071714 start.go:495] detecting cgroup driver to use...
	I0819 20:45:21.091847 1071714 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0819 20:45:21.091918 1071714 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 20:45:21.105053 1071714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 20:45:21.117284 1071714 docker.go:217] disabling cri-docker service (if available) ...
	I0819 20:45:21.117347 1071714 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 20:45:21.131113 1071714 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 20:45:21.143296 1071714 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 20:45:21.230384 1071714 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 20:45:21.313669 1071714 docker.go:233] disabling docker service ...
	I0819 20:45:21.313774 1071714 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 20:45:21.326660 1071714 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 20:45:21.338752 1071714 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 20:45:21.420917 1071714 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 20:45:21.505909 1071714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 20:45:21.517860 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 20:45:21.535626 1071714 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 20:45:21.535693 1071714 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:45:21.546556 1071714 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 20:45:21.546627 1071714 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:45:21.557332 1071714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:45:21.568241 1071714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:45:21.579041 1071714 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 20:45:21.589211 1071714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:45:21.600229 1071714 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:45:21.610084 1071714 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:45:21.619932 1071714 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 20:45:21.628392 1071714 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 20:45:21.636970 1071714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 20:45:21.730320 1071714 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 20:45:21.840185 1071714 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 20:45:21.840307 1071714 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 20:45:21.844235 1071714 start.go:563] Will wait 60s for crictl version
	I0819 20:45:21.844336 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:45:21.848530 1071714 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 20:45:21.890420 1071714 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0819 20:45:21.890525 1071714 ssh_runner.go:195] Run: crio --version
	I0819 20:45:21.928971 1071714 ssh_runner.go:195] Run: crio --version
	I0819 20:45:21.972993 1071714 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0819 20:45:21.975573 1071714 cli_runner.go:164] Run: docker network inspect ha-876838 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 20:45:21.991274 1071714 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0819 20:45:21.995016 1071714 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 20:45:22.007752 1071714 kubeadm.go:883] updating cluster {Name:ha-876838 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-876838 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 20:45:22.007933 1071714 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 20:45:22.008012 1071714 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 20:45:22.062755 1071714 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 20:45:22.062782 1071714 crio.go:433] Images already preloaded, skipping extraction
	I0819 20:45:22.062862 1071714 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 20:45:22.102610 1071714 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 20:45:22.102646 1071714 cache_images.go:84] Images are preloaded, skipping loading
	I0819 20:45:22.102656 1071714 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0819 20:45:22.102771 1071714 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-876838 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-876838 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 20:45:22.102860 1071714 ssh_runner.go:195] Run: crio config
	I0819 20:45:22.152136 1071714 cni.go:84] Creating CNI manager for ""
	I0819 20:45:22.152159 1071714 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0819 20:45:22.152170 1071714 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 20:45:22.152191 1071714 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-876838 NodeName:ha-876838 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 20:45:22.152351 1071714 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-876838"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
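The kubeadm config rendered above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines later, and its networking block must agree with the pod CIDR (10.244.0.0/16) and service CIDR (10.96.0.0/12) chosen at 20:45:22.152191. As a rough illustration only (not minikube code), here is a Go sketch that parses the rendered file and cross-checks those two values; the struct is a hypothetical subset of the ClusterConfiguration schema.

package main

import (
	"fmt"
	"log"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

// clusterConfig models only the fields of the ClusterConfiguration
// document this check cares about (an illustrative subset).
type clusterConfig struct {
	Kind       string `yaml:"kind"`
	Networking struct {
		PodSubnet     string `yaml:"podSubnet"`
		ServiceSubnet string `yaml:"serviceSubnet"`
	} `yaml:"networking"`
}

func main() {
	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path taken from the log above
	if err != nil {
		log.Fatal(err)
	}
	// The rendered file holds several YAML documents separated by "---";
	// find the ClusterConfiguration and compare its subnets.
	for _, doc := range strings.Split(string(raw), "\n---\n") {
		var cc clusterConfig
		if yaml.Unmarshal([]byte(doc), &cc) != nil || cc.Kind != "ClusterConfiguration" {
			continue
		}
		fmt.Println("podSubnet ok:", cc.Networking.PodSubnet == "10.244.0.0/16")
		fmt.Println("serviceSubnet ok:", cc.Networking.ServiceSubnet == "10.96.0.0/12")
	}
}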
	
	I0819 20:45:22.152378 1071714 kube-vip.go:115] generating kube-vip config ...
	I0819 20:45:22.152428 1071714 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0819 20:45:22.165230 1071714 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 20:45:22.165369 1071714 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
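The lb_enable/lb_port settings in the manifest above follow from the `lsmod | grep ip_vs` probe at 20:45:22.152428: control-plane load-balancing is auto-enabled only when the ip_vs module is available. A minimal sketch of that decision, run locally rather than through minikube's actual ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

// The probe mirrors the log: `lsmod | grep ip_vs` exits non-zero when the
// ip_vs module is not loaded, and load-balancing then stays disabled.
func enableControlPlaneLB() bool {
	return exec.Command("sh", "-c", "lsmod | grep ip_vs").Run() == nil
}

func main() {
	if enableControlPlaneLB() {
		fmt.Println(`render lb_enable: "true" and lb_port: "8443" into the manifest`)
	} else {
		fmt.Println("omit the lb_* environment variables")
	}
}

On the false branch the template would simply leave out the two lb_* environment variables seen in the rendered pod above.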
	I0819 20:45:22.165437 1071714 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 20:45:22.175326 1071714 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 20:45:22.175403 1071714 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 20:45:22.184365 1071714 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0819 20:45:22.202747 1071714 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 20:45:22.220892 1071714 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0819 20:45:22.239942 1071714 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0819 20:45:22.258805 1071714 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0819 20:45:22.262753 1071714 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 20:45:22.274125 1071714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 20:45:22.367122 1071714 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 20:45:22.381882 1071714 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838 for IP: 192.168.49.2
	I0819 20:45:22.381908 1071714 certs.go:194] generating shared ca certs ...
	I0819 20:45:22.381937 1071714 certs.go:226] acquiring lock for ca certs: {Name:mka0619a4a0da3f790025b70d844d99358d748e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:45:22.382074 1071714 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.key
	I0819 20:45:22.382124 1071714 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/proxy-client-ca.key
	I0819 20:45:22.382135 1071714 certs.go:256] generating profile certs ...
	I0819 20:45:22.382218 1071714 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/client.key
	I0819 20:45:22.382264 1071714 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/apiserver.key.749b911d
	I0819 20:45:22.382279 1071714 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/apiserver.crt.749b911d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0819 20:45:22.666415 1071714 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/apiserver.crt.749b911d ...
	I0819 20:45:22.666451 1071714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/apiserver.crt.749b911d: {Name:mk8adddb724ceaab565774a40e0c397c0c9a16e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:45:22.666650 1071714 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/apiserver.key.749b911d ...
	I0819 20:45:22.666674 1071714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/apiserver.key.749b911d: {Name:mk23b872dd27dbe07e35025c94b1c3dbfcdfa09b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:45:22.666770 1071714 certs.go:381] copying /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/apiserver.crt.749b911d -> /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/apiserver.crt
	I0819 20:45:22.666920 1071714 certs.go:385] copying /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/apiserver.key.749b911d -> /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/apiserver.key
	I0819 20:45:22.667064 1071714 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/proxy-client.key
	I0819 20:45:22.667084 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 20:45:22.667102 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 20:45:22.667123 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 20:45:22.667143 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 20:45:22.667158 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 20:45:22.667173 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 20:45:22.667185 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 20:45:22.667198 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 20:45:22.667249 1071714 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/1011462.pem (1338 bytes)
	W0819 20:45:22.667281 1071714 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/1011462_empty.pem, impossibly tiny 0 bytes
	I0819 20:45:22.667295 1071714 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 20:45:22.667321 1071714 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem (1082 bytes)
	I0819 20:45:22.667348 1071714 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/cert.pem (1123 bytes)
	I0819 20:45:22.667372 1071714 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/key.pem (1675 bytes)
	I0819 20:45:22.667419 1071714 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/files/etc/ssl/certs/10114622.pem (1708 bytes)
	I0819 20:45:22.667450 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:45:22.667467 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/1011462.pem -> /usr/share/ca-certificates/1011462.pem
	I0819 20:45:22.667488 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/files/etc/ssl/certs/10114622.pem -> /usr/share/ca-certificates/10114622.pem
	I0819 20:45:22.668158 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 20:45:22.700154 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 20:45:22.725215 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 20:45:22.750524 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 20:45:22.777183 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 20:45:22.803475 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 20:45:22.829227 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 20:45:22.854265 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 20:45:22.879924 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 20:45:22.905125 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/1011462.pem --> /usr/share/ca-certificates/1011462.pem (1338 bytes)
	I0819 20:45:22.930162 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/files/etc/ssl/certs/10114622.pem --> /usr/share/ca-certificates/10114622.pem (1708 bytes)
	I0819 20:45:22.954607 1071714 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 20:45:22.973104 1071714 ssh_runner.go:195] Run: openssl version
	I0819 20:45:22.980321 1071714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1011462.pem && ln -fs /usr/share/ca-certificates/1011462.pem /etc/ssl/certs/1011462.pem"
	I0819 20:45:22.991261 1071714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1011462.pem
	I0819 20:45:22.995141 1071714 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 20:32 /usr/share/ca-certificates/1011462.pem
	I0819 20:45:22.995219 1071714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1011462.pem
	I0819 20:45:23.002255 1071714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1011462.pem /etc/ssl/certs/51391683.0"
	I0819 20:45:23.015002 1071714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10114622.pem && ln -fs /usr/share/ca-certificates/10114622.pem /etc/ssl/certs/10114622.pem"
	I0819 20:45:23.025543 1071714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10114622.pem
	I0819 20:45:23.029252 1071714 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 20:32 /usr/share/ca-certificates/10114622.pem
	I0819 20:45:23.029347 1071714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10114622.pem
	I0819 20:45:23.036920 1071714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10114622.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 20:45:23.046484 1071714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 20:45:23.056508 1071714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:45:23.061225 1071714 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 20:21 /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:45:23.061343 1071714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:45:23.068775 1071714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
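The test/openssl/ln sequence above installs each CA under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem) so the system trust store can resolve it. A small Go sketch of the same idea, shelling out to openssl exactly as the log does; the paths are placeholders and this is not minikube's implementation:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert installs pemPath into /etc/ssl/certs under its OpenSSL
// subject-hash name (<hash>.0), the layout produced by the
// "openssl x509 -hash -noout" + "ln -fs" pair in the log above.
func linkCACert(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	// -f replaces any existing link, matching the "ln -fs" in the log.
	return link, exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
}

func main() {
	link, err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("installed", link)
}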
	I0819 20:45:23.077814 1071714 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 20:45:23.081426 1071714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 20:45:23.088453 1071714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 20:45:23.095921 1071714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 20:45:23.102906 1071714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 20:45:23.109705 1071714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 20:45:23.116504 1071714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
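The repeated `openssl x509 -noout ... -checkend 86400` calls above ask whether each control-plane certificate is still valid 24 hours from now. The same check can be done in-process with crypto/x509; a minimal sketch under that assumption, using one of the paths from the log as an example:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// becomes invalid within d, i.e. the condition "-checkend" tests for.
func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 86400*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}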
	I0819 20:45:23.124448 1071714 kubeadm.go:392] StartCluster: {Name:ha-876838 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-876838 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 20:45:23.124587 1071714 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 20:45:23.124652 1071714 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 20:45:23.162029 1071714 cri.go:89] found id: ""
	I0819 20:45:23.162105 1071714 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 20:45:23.171127 1071714 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 20:45:23.171199 1071714 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 20:45:23.171264 1071714 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 20:45:23.179894 1071714 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 20:45:23.180388 1071714 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-876838" does not appear in /home/jenkins/minikube-integration/19423-1006087/kubeconfig
	I0819 20:45:23.180493 1071714 kubeconfig.go:62] /home/jenkins/minikube-integration/19423-1006087/kubeconfig needs updating (will repair): [kubeconfig missing "ha-876838" cluster setting kubeconfig missing "ha-876838" context setting]
	I0819 20:45:23.180767 1071714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1006087/kubeconfig: {Name:mk82300af76d6335c7b97db5d9d0a0f9960b80de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:45:23.181172 1071714 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19423-1006087/kubeconfig
	I0819 20:45:23.181429 1071714 kapi.go:59] client config for ha-876838: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/client.crt", KeyFile:"/home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/client.key", CAFile:"/home/jenkins/minikube-integration/19423-1006087/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19cb7b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 20:45:23.182239 1071714 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 20:45:23.182309 1071714 cert_rotation.go:140] Starting client certificate rotation controller
	I0819 20:45:23.191477 1071714 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.49.2
	I0819 20:45:23.191540 1071714 kubeadm.go:597] duration metric: took 20.328323ms to restartPrimaryControlPlane
	I0819 20:45:23.191589 1071714 kubeadm.go:394] duration metric: took 67.152061ms to StartCluster
	I0819 20:45:23.191612 1071714 settings.go:142] acquiring lock: {Name:mk3a0c8d8afbf5cfbc8b518d1bda35579f7cba54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:45:23.191674 1071714 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-1006087/kubeconfig
	I0819 20:45:23.192312 1071714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1006087/kubeconfig: {Name:mk82300af76d6335c7b97db5d9d0a0f9960b80de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:45:23.192520 1071714 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 20:45:23.192547 1071714 start.go:241] waiting for startup goroutines ...
	I0819 20:45:23.192555 1071714 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 20:45:23.193075 1071714 config.go:182] Loaded profile config "ha-876838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 20:45:23.197153 1071714 out.go:177] * Enabled addons: 
	I0819 20:45:23.199621 1071714 addons.go:510] duration metric: took 7.058335ms for enable addons: enabled=[]
	I0819 20:45:23.199665 1071714 start.go:246] waiting for cluster config update ...
	I0819 20:45:23.199677 1071714 start.go:255] writing updated cluster config ...
	I0819 20:45:23.202392 1071714 out.go:201] 
	I0819 20:45:23.204959 1071714 config.go:182] Loaded profile config "ha-876838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 20:45:23.205068 1071714 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/config.json ...
	I0819 20:45:23.208009 1071714 out.go:177] * Starting "ha-876838-m02" control-plane node in "ha-876838" cluster
	I0819 20:45:23.210635 1071714 cache.go:121] Beginning downloading kic base image for docker with crio
	I0819 20:45:23.213353 1071714 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0819 20:45:23.215920 1071714 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 20:45:23.215957 1071714 cache.go:56] Caching tarball of preloaded images
	I0819 20:45:23.216016 1071714 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 20:45:23.216054 1071714 preload.go:172] Found /home/jenkins/minikube-integration/19423-1006087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0819 20:45:23.216072 1071714 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 20:45:23.216206 1071714 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/config.json ...
	W0819 20:45:23.236709 1071714 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d is of wrong architecture
	I0819 20:45:23.236738 1071714 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 20:45:23.236818 1071714 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 20:45:23.236847 1071714 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0819 20:45:23.236855 1071714 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0819 20:45:23.236864 1071714 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 20:45:23.236870 1071714 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0819 20:45:23.238164 1071714 image.go:273] response: 
	I0819 20:45:23.359911 1071714 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0819 20:45:23.359955 1071714 cache.go:194] Successfully downloaded all kic artifacts
	I0819 20:45:23.359987 1071714 start.go:360] acquireMachinesLock for ha-876838-m02: {Name:mkba7b4712a52d6e8c60212cba300ee7cec664a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 20:45:23.360057 1071714 start.go:364] duration metric: took 46.629µs to acquireMachinesLock for "ha-876838-m02"
	I0819 20:45:23.360081 1071714 start.go:96] Skipping create...Using existing machine configuration
	I0819 20:45:23.360090 1071714 fix.go:54] fixHost starting: m02
	I0819 20:45:23.360366 1071714 cli_runner.go:164] Run: docker container inspect ha-876838-m02 --format={{.State.Status}}
	I0819 20:45:23.376245 1071714 fix.go:112] recreateIfNeeded on ha-876838-m02: state=Stopped err=<nil>
	W0819 20:45:23.376273 1071714 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 20:45:23.380998 1071714 out.go:177] * Restarting existing docker container for "ha-876838-m02" ...
	I0819 20:45:23.383504 1071714 cli_runner.go:164] Run: docker start ha-876838-m02
	I0819 20:45:23.680525 1071714 cli_runner.go:164] Run: docker container inspect ha-876838-m02 --format={{.State.Status}}
	I0819 20:45:23.698041 1071714 kic.go:430] container "ha-876838-m02" state is running.
	I0819 20:45:23.698521 1071714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-876838-m02
	I0819 20:45:23.719943 1071714 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/config.json ...
	I0819 20:45:23.720389 1071714 machine.go:93] provisionDockerMachine start ...
	I0819 20:45:23.721019 1071714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876838-m02
	I0819 20:45:23.752799 1071714 main.go:141] libmachine: Using SSH client type: native
	I0819 20:45:23.753438 1071714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33963 <nil> <nil>}
	I0819 20:45:23.753463 1071714 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 20:45:23.754563 1071714 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0819 20:45:26.934774 1071714 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-876838-m02
	
	I0819 20:45:26.934797 1071714 ubuntu.go:169] provisioning hostname "ha-876838-m02"
	I0819 20:45:26.934866 1071714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876838-m02
	I0819 20:45:26.996985 1071714 main.go:141] libmachine: Using SSH client type: native
	I0819 20:45:26.997252 1071714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33963 <nil> <nil>}
	I0819 20:45:26.997270 1071714 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-876838-m02 && echo "ha-876838-m02" | sudo tee /etc/hostname
	I0819 20:45:27.219378 1071714 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-876838-m02
	
	I0819 20:45:27.219502 1071714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876838-m02
	I0819 20:45:27.258631 1071714 main.go:141] libmachine: Using SSH client type: native
	I0819 20:45:27.258869 1071714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33963 <nil> <nil>}
	I0819 20:45:27.258884 1071714 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-876838-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-876838-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-876838-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 20:45:27.453968 1071714 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 20:45:27.454055 1071714 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19423-1006087/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-1006087/.minikube}
	I0819 20:45:27.454093 1071714 ubuntu.go:177] setting up certificates
	I0819 20:45:27.454119 1071714 provision.go:84] configureAuth start
	I0819 20:45:27.454192 1071714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-876838-m02
	I0819 20:45:27.487255 1071714 provision.go:143] copyHostCerts
	I0819 20:45:27.487295 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.pem
	I0819 20:45:27.487330 1071714 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.pem, removing ...
	I0819 20:45:27.487337 1071714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.pem
	I0819 20:45:27.487410 1071714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.pem (1082 bytes)
	I0819 20:45:27.487483 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-1006087/.minikube/cert.pem
	I0819 20:45:27.487499 1071714 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-1006087/.minikube/cert.pem, removing ...
	I0819 20:45:27.487504 1071714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-1006087/.minikube/cert.pem
	I0819 20:45:27.487528 1071714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-1006087/.minikube/cert.pem (1123 bytes)
	I0819 20:45:27.487567 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-1006087/.minikube/key.pem
	I0819 20:45:27.487582 1071714 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-1006087/.minikube/key.pem, removing ...
	I0819 20:45:27.487586 1071714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-1006087/.minikube/key.pem
	I0819 20:45:27.487608 1071714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-1006087/.minikube/key.pem (1675 bytes)
	I0819 20:45:27.487652 1071714 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca-key.pem org=jenkins.ha-876838-m02 san=[127.0.0.1 192.168.49.3 ha-876838-m02 localhost minikube]
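configureAuth above issues a Docker machine server certificate signed by the local CA, with the node's IP and hostnames from the san=[...] list as SANs. A simplified crypto/x509 sketch of issuing such a SAN-bearing certificate; the throwaway CA, key size and validity period are illustrative choices, not what minikube's provisioner actually uses:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

// issueServerCert signs a server certificate for the given SANs with the
// provided CA, roughly what the configureAuth step above performs.
func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP, dns []string) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()), // simplification: a real issuer uses random serials
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-876838-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
		DNSNames:     dns,
	}
	return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
}

func main() {
	// Throwaway CA standing in for .minikube/certs/ca.pem; errors elided for brevity.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "sketchCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)

	// SANs copied from the provision.go:117 line above.
	der, err := issueServerCert(ca, caKey,
		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
		[]string{"ha-876838-m02", "localhost", "minikube"})
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}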
	I0819 20:45:27.824912 1071714 provision.go:177] copyRemoteCerts
	I0819 20:45:27.825004 1071714 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 20:45:27.825066 1071714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876838-m02
	I0819 20:45:27.843716 1071714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/ha-876838-m02/id_rsa Username:docker}
	I0819 20:45:27.977181 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 20:45:27.977247 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 20:45:28.053029 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 20:45:28.053095 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 20:45:28.123616 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 20:45:28.123682 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 20:45:28.190526 1071714 provision.go:87] duration metric: took 736.379349ms to configureAuth
	I0819 20:45:28.190551 1071714 ubuntu.go:193] setting minikube options for container-runtime
	I0819 20:45:28.190785 1071714 config.go:182] Loaded profile config "ha-876838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 20:45:28.190888 1071714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876838-m02
	I0819 20:45:28.212401 1071714 main.go:141] libmachine: Using SSH client type: native
	I0819 20:45:28.212652 1071714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33963 <nil> <nil>}
	I0819 20:45:28.212671 1071714 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 20:45:28.651824 1071714 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 20:45:28.651885 1071714 machine.go:96] duration metric: took 4.931484816s to provisionDockerMachine
	I0819 20:45:28.651913 1071714 start.go:293] postStartSetup for "ha-876838-m02" (driver="docker")
	I0819 20:45:28.651959 1071714 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 20:45:28.652057 1071714 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 20:45:28.652119 1071714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876838-m02
	I0819 20:45:28.682274 1071714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/ha-876838-m02/id_rsa Username:docker}
	I0819 20:45:28.863018 1071714 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 20:45:28.869090 1071714 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0819 20:45:28.869129 1071714 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0819 20:45:28.869141 1071714 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0819 20:45:28.869150 1071714 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0819 20:45:28.869161 1071714 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-1006087/.minikube/addons for local assets ...
	I0819 20:45:28.869219 1071714 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-1006087/.minikube/files for local assets ...
	I0819 20:45:28.869300 1071714 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-1006087/.minikube/files/etc/ssl/certs/10114622.pem -> 10114622.pem in /etc/ssl/certs
	I0819 20:45:28.869312 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/files/etc/ssl/certs/10114622.pem -> /etc/ssl/certs/10114622.pem
	I0819 20:45:28.869416 1071714 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 20:45:28.894873 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/files/etc/ssl/certs/10114622.pem --> /etc/ssl/certs/10114622.pem (1708 bytes)
	I0819 20:45:28.968118 1071714 start.go:296] duration metric: took 316.155407ms for postStartSetup
	I0819 20:45:28.968224 1071714 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 20:45:28.968288 1071714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876838-m02
	I0819 20:45:29.004176 1071714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/ha-876838-m02/id_rsa Username:docker}
	I0819 20:45:29.183877 1071714 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0819 20:45:29.195515 1071714 fix.go:56] duration metric: took 5.835417383s for fixHost
	I0819 20:45:29.195544 1071714 start.go:83] releasing machines lock for "ha-876838-m02", held for 5.83547472s
	I0819 20:45:29.195620 1071714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-876838-m02
	I0819 20:45:29.225373 1071714 out.go:177] * Found network options:
	I0819 20:45:29.228525 1071714 out.go:177]   - NO_PROXY=192.168.49.2
	W0819 20:45:29.231370 1071714 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 20:45:29.231409 1071714 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 20:45:29.231473 1071714 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 20:45:29.231519 1071714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876838-m02
	I0819 20:45:29.231748 1071714 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 20:45:29.231800 1071714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876838-m02
	I0819 20:45:29.269108 1071714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/ha-876838-m02/id_rsa Username:docker}
	I0819 20:45:29.269843 1071714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33963 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/ha-876838-m02/id_rsa Username:docker}
	I0819 20:45:29.738526 1071714 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 20:45:29.750186 1071714 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 20:45:29.797191 1071714 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0819 20:45:29.797313 1071714 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 20:45:29.831234 1071714 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 20:45:29.831307 1071714 start.go:495] detecting cgroup driver to use...
	I0819 20:45:29.831352 1071714 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0819 20:45:29.831433 1071714 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 20:45:29.874409 1071714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 20:45:29.904062 1071714 docker.go:217] disabling cri-docker service (if available) ...
	I0819 20:45:29.904181 1071714 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 20:45:29.943034 1071714 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 20:45:29.965649 1071714 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 20:45:30.294968 1071714 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 20:45:30.620506 1071714 docker.go:233] disabling docker service ...
	I0819 20:45:30.620631 1071714 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 20:45:30.654324 1071714 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 20:45:30.698178 1071714 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 20:45:30.989663 1071714 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 20:45:31.267385 1071714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 20:45:31.339804 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 20:45:31.414192 1071714 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 20:45:31.414352 1071714 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:45:31.461249 1071714 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 20:45:31.461381 1071714 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:45:31.487691 1071714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:45:31.525353 1071714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:45:31.566278 1071714 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 20:45:31.618075 1071714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:45:31.661047 1071714 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:45:31.719509 1071714 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:45:31.766342 1071714 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 20:45:31.803932 1071714 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 20:45:31.839843 1071714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 20:45:32.143979 1071714 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 20:45:33.862415 1071714 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.718353116s)
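The sed commands above pin the CRI-O pause image to registry.k8s.io/pause:3.10 and the cgroup manager to cgroupfs in /etc/crio/crio.conf.d/02-crio.conf before the restart. The same rewrite expressed as a small Go sketch (an illustration, not how minikube does it):

package main

import (
	"log"
	"os"
	"regexp"
)

// pinCrioSettings rewrites 02-crio.conf so that pause_image and
// cgroup_manager match what the sed commands in the log enforce.
func pinCrioSettings(path string) error {
	conf, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))
	return os.WriteFile(path, conf, 0o644)
}

func main() {
	if err := pinCrioSettings("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		log.Fatal(err)
	}
	// A `systemctl restart crio`, as in the log, would then pick up the change.
}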
	I0819 20:45:33.862498 1071714 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 20:45:33.862596 1071714 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 20:45:33.878153 1071714 start.go:563] Will wait 60s for crictl version
	I0819 20:45:33.878297 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:45:33.882113 1071714 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 20:45:33.956451 1071714 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0819 20:45:33.956613 1071714 ssh_runner.go:195] Run: crio --version
	I0819 20:45:34.032541 1071714 ssh_runner.go:195] Run: crio --version
	I0819 20:45:34.113235 1071714 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0819 20:45:34.114813 1071714 out.go:177]   - env NO_PROXY=192.168.49.2
	I0819 20:45:34.116425 1071714 cli_runner.go:164] Run: docker network inspect ha-876838 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 20:45:34.140913 1071714 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0819 20:45:34.144638 1071714 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
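The /etc/hosts one-liners at 20:45:22.262753 and 20:45:34.144638 are idempotent: any stale line for the name is dropped and the current ip<TAB>name mapping is appended. A compact Go equivalent, with the values taken from the log line above; writing /etc/hosts of course needs root:

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// ensureHostsEntry reproduces the one-liner in the log: drop any stale line
// for host and append "ip<TAB>host", leaving everything else untouched.
func ensureHostsEntry(path, ip, host string) error {
	raw, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale mapping for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}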
	I0819 20:45:34.163626 1071714 mustload.go:65] Loading cluster: ha-876838
	I0819 20:45:34.163865 1071714 config.go:182] Loaded profile config "ha-876838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 20:45:34.164175 1071714 cli_runner.go:164] Run: docker container inspect ha-876838 --format={{.State.Status}}
	I0819 20:45:34.189644 1071714 host.go:66] Checking if "ha-876838" exists ...
	I0819 20:45:34.189928 1071714 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838 for IP: 192.168.49.3
	I0819 20:45:34.189943 1071714 certs.go:194] generating shared ca certs ...
	I0819 20:45:34.189962 1071714 certs.go:226] acquiring lock for ca certs: {Name:mka0619a4a0da3f790025b70d844d99358d748e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:45:34.190100 1071714 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.key
	I0819 20:45:34.190149 1071714 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/proxy-client-ca.key
	I0819 20:45:34.190162 1071714 certs.go:256] generating profile certs ...
	I0819 20:45:34.190252 1071714 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/client.key
	I0819 20:45:34.190313 1071714 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/apiserver.key.19d3a231
	I0819 20:45:34.190355 1071714 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/proxy-client.key
	I0819 20:45:34.190369 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 20:45:34.190384 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 20:45:34.190400 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 20:45:34.190411 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 20:45:34.190435 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 20:45:34.190452 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 20:45:34.190469 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 20:45:34.190480 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 20:45:34.190537 1071714 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/1011462.pem (1338 bytes)
	W0819 20:45:34.190570 1071714 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/1011462_empty.pem, impossibly tiny 0 bytes
	I0819 20:45:34.190583 1071714 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 20:45:34.190607 1071714 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem (1082 bytes)
	I0819 20:45:34.190635 1071714 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/cert.pem (1123 bytes)
	I0819 20:45:34.190661 1071714 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/key.pem (1675 bytes)
	I0819 20:45:34.190712 1071714 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/files/etc/ssl/certs/10114622.pem (1708 bytes)
	I0819 20:45:34.190746 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/1011462.pem -> /usr/share/ca-certificates/1011462.pem
	I0819 20:45:34.190761 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/files/etc/ssl/certs/10114622.pem -> /usr/share/ca-certificates/10114622.pem
	I0819 20:45:34.190773 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:45:34.190832 1071714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876838
	I0819 20:45:34.221702 1071714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33958 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/ha-876838/id_rsa Username:docker}
	I0819 20:45:34.333881 1071714 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 20:45:34.344637 1071714 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 20:45:34.373890 1071714 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 20:45:34.384658 1071714 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0819 20:45:34.418395 1071714 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 20:45:34.436149 1071714 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 20:45:34.471355 1071714 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 20:45:34.484695 1071714 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0819 20:45:34.516474 1071714 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 20:45:34.521394 1071714 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 20:45:34.535369 1071714 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 20:45:34.543938 1071714 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0819 20:45:34.561617 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 20:45:34.611198 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 20:45:34.655140 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 20:45:34.691521 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 20:45:34.727735 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 20:45:34.831130 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 20:45:34.918480 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 20:45:34.954264 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0819 20:45:34.995261 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/1011462.pem --> /usr/share/ca-certificates/1011462.pem (1338 bytes)
	I0819 20:45:35.039909 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/files/etc/ssl/certs/10114622.pem --> /usr/share/ca-certificates/10114622.pem (1708 bytes)
	I0819 20:45:35.078988 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 20:45:35.114158 1071714 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 20:45:35.141814 1071714 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0819 20:45:35.167914 1071714 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 20:45:35.189322 1071714 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0819 20:45:35.217710 1071714 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 20:45:35.252018 1071714 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0819 20:45:35.273622 1071714 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0819 20:45:35.296679 1071714 ssh_runner.go:195] Run: openssl version
	I0819 20:45:35.303355 1071714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 20:45:35.316992 1071714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:45:35.320945 1071714 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 20:21 /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:45:35.321030 1071714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:45:35.328860 1071714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 20:45:35.339077 1071714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1011462.pem && ln -fs /usr/share/ca-certificates/1011462.pem /etc/ssl/certs/1011462.pem"
	I0819 20:45:35.352564 1071714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1011462.pem
	I0819 20:45:35.358981 1071714 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 20:32 /usr/share/ca-certificates/1011462.pem
	I0819 20:45:35.359081 1071714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1011462.pem
	I0819 20:45:35.366919 1071714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1011462.pem /etc/ssl/certs/51391683.0"
	I0819 20:45:35.377122 1071714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10114622.pem && ln -fs /usr/share/ca-certificates/10114622.pem /etc/ssl/certs/10114622.pem"
	I0819 20:45:35.393151 1071714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10114622.pem
	I0819 20:45:35.397126 1071714 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 20:32 /usr/share/ca-certificates/10114622.pem
	I0819 20:45:35.397222 1071714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10114622.pem
	I0819 20:45:35.409939 1071714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10114622.pem /etc/ssl/certs/3ec20f2e.0"
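	The hash/symlink runs above follow the standard OpenSSL trust-store layout: each CA certificate in /etc/ssl/certs is reachable through a <subject-hash>.0 symlink, where the hash comes from "openssl x509 -hash". The following is only an illustrative Go sketch of that one step (hypothetical main, assumes the openssl binary is on PATH and the paths from the log exist); it is not minikube's code.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		// Certificate path taken from the log above; any PEM CA certificate works.
		pem := "/usr/share/ca-certificates/minikubeCA.pem"
		// "openssl x509 -hash -noout -in <pem>" prints the subject-name hash (e.g. b5213941).
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			fmt.Println("openssl:", err)
			return
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// Equivalent of "ln -fs": drop any stale link, then point <hash>.0 at the PEM.
		_ = os.Remove(link)
		if err := os.Symlink(pem, link); err != nil {
			fmt.Println("symlink:", err)
			return
		}
		fmt.Println(link, "->", pem)
	}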
	I0819 20:45:35.422624 1071714 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 20:45:35.427071 1071714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 20:45:35.438447 1071714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 20:45:35.447555 1071714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 20:45:35.460499 1071714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 20:45:35.470680 1071714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 20:45:35.478160 1071714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
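	The "-checkend 86400" invocations above ask whether each certificate expires within the next 24 hours. The same check can be done without shelling out, as in this rough sketch using crypto/x509 (the path is just one of the files named in the log; expiresWithin is a hypothetical helper, not minikube's API).

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the certificate at path expires inside the given window.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// Same question as "openssl x509 -checkend 86400": is NotAfter before now+window?
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}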
	I0819 20:45:35.486075 1071714 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.31.0 crio true true} ...
	I0819 20:45:35.486219 1071714 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-876838-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-876838 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 20:45:35.486287 1071714 kube-vip.go:115] generating kube-vip config ...
	I0819 20:45:35.486341 1071714 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0819 20:45:35.501023 1071714 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0819 20:45:35.501109 1071714 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0819 20:45:35.501195 1071714 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 20:45:35.511927 1071714 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 20:45:35.512031 1071714 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 20:45:35.521546 1071714 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0819 20:45:35.541719 1071714 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 20:45:35.564196 1071714 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0819 20:45:35.590592 1071714 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0819 20:45:35.599106 1071714 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 20:45:35.611196 1071714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 20:45:35.833621 1071714 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 20:45:35.855496 1071714 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 20:45:35.855916 1071714 config.go:182] Loaded profile config "ha-876838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 20:45:35.860512 1071714 out.go:177] * Verifying Kubernetes components...
	I0819 20:45:35.862980 1071714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 20:45:35.995811 1071714 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 20:45:36.028423 1071714 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19423-1006087/kubeconfig
	I0819 20:45:36.028723 1071714 kapi.go:59] client config for ha-876838: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/client.crt", KeyFile:"/home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/client.key", CAFile:"/home/jenkins/minikube-integration/19423-1006087/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19cb7b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 20:45:36.028786 1071714 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0819 20:45:36.029015 1071714 node_ready.go:35] waiting up to 6m0s for node "ha-876838-m02" to be "Ready" ...
	I0819 20:45:36.029098 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m02
	I0819 20:45:36.029104 1071714 round_trippers.go:469] Request Headers:
	I0819 20:45:36.029113 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:45:36.029118 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:45:47.805788 1071714 round_trippers.go:574] Response Status: 500 Internal Server Error in 11776 milliseconds
	I0819 20:45:47.806032 1071714 node_ready.go:53] error getting node "ha-876838-m02": etcdserver: request timed out
	I0819 20:45:47.806085 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m02
	I0819 20:45:47.806090 1071714 round_trippers.go:469] Request Headers:
	I0819 20:45:47.806098 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:45:47.806101 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:45:57.815178 1071714 round_trippers.go:574] Response Status: 500 Internal Server Error in 10009 milliseconds
	I0819 20:45:57.815532 1071714 node_ready.go:53] error getting node "ha-876838-m02": etcdserver: leader changed
	I0819 20:45:57.815603 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m02
	I0819 20:45:57.815609 1071714 round_trippers.go:469] Request Headers:
	I0819 20:45:57.815617 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:45:57.815621 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:45:57.831184 1071714 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0819 20:45:57.832373 1071714 node_ready.go:49] node "ha-876838-m02" has status "Ready":"True"
	I0819 20:45:57.832393 1071714 node_ready.go:38] duration metric: took 21.803360541s for node "ha-876838-m02" to be "Ready" ...
	I0819 20:45:57.832403 1071714 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 20:45:57.832444 1071714 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 20:45:57.832454 1071714 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 20:45:57.832513 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0819 20:45:57.832518 1071714 round_trippers.go:469] Request Headers:
	I0819 20:45:57.832525 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:45:57.832530 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:45:57.838310 1071714 round_trippers.go:574] Response Status: 429 Too Many Requests in 5 milliseconds
	I0819 20:45:58.838577 1071714 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0819 20:45:58.838626 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0819 20:45:58.838631 1071714 round_trippers.go:469] Request Headers:
	I0819 20:45:58.838640 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:45:58.838644 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:45:58.860416 1071714 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
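	The 429 Too Many Requests above carries a Retry-After header, and the client (with_retry.go) waits the indicated second before re-issuing the GET, which then succeeds. The sketch below shows that HTTP-level behavior only; it is not client-go's implementation, and the URL taken from the log would additionally need the cluster's client certificates to authenticate.

	package main

	import (
		"fmt"
		"net/http"
		"strconv"
		"time"
	)

	// getWithRetryAfter retries a GET while the server answers 429, honoring Retry-After.
	func getWithRetryAfter(client *http.Client, url string, attempts int) (*http.Response, error) {
		for i := 0; i < attempts; i++ {
			resp, err := client.Get(url)
			if err != nil {
				return nil, err
			}
			if resp.StatusCode != http.StatusTooManyRequests {
				return resp, nil
			}
			// Honor the server's Retry-After header (seconds) before the next attempt.
			wait := 1 * time.Second
			if s := resp.Header.Get("Retry-After"); s != "" {
				if n, err := strconv.Atoi(s); err == nil {
					wait = time.Duration(n) * time.Second
				}
			}
			resp.Body.Close()
			time.Sleep(wait)
		}
		return nil, fmt.Errorf("still throttled after %d attempts", attempts)
	}

	func main() {
		// Illustrative endpoint from the log; a real call needs TLS client configuration.
		resp, err := getWithRetryAfter(http.DefaultClient, "https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods", 3)
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("status:", resp.Status)
		resp.Body.Close()
	}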
	I0819 20:45:58.875541 1071714 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-d2bzw" in "kube-system" namespace to be "Ready" ...
	I0819 20:45:58.875742 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:45:58.875770 1071714 round_trippers.go:469] Request Headers:
	I0819 20:45:58.875796 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:45:58.875819 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:45:58.887435 1071714 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0819 20:45:58.888504 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:45:58.888563 1071714 round_trippers.go:469] Request Headers:
	I0819 20:45:58.888587 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:45:58.888609 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:45:58.891374 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:45:58.892128 1071714 pod_ready.go:93] pod "coredns-6f6b679f8f-d2bzw" in "kube-system" namespace has status "Ready":"True"
	I0819 20:45:58.892152 1071714 pod_ready.go:82] duration metric: took 16.529319ms for pod "coredns-6f6b679f8f-d2bzw" in "kube-system" namespace to be "Ready" ...
	I0819 20:45:58.892167 1071714 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-m4zj2" in "kube-system" namespace to be "Ready" ...
	I0819 20:45:58.892246 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m4zj2
	I0819 20:45:58.892251 1071714 round_trippers.go:469] Request Headers:
	I0819 20:45:58.892260 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:45:58.892266 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:45:58.895290 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:45:58.896256 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:45:58.896275 1071714 round_trippers.go:469] Request Headers:
	I0819 20:45:58.896284 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:45:58.896290 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:45:58.899063 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:45:58.900072 1071714 pod_ready.go:93] pod "coredns-6f6b679f8f-m4zj2" in "kube-system" namespace has status "Ready":"True"
	I0819 20:45:58.900097 1071714 pod_ready.go:82] duration metric: took 7.92191ms for pod "coredns-6f6b679f8f-m4zj2" in "kube-system" namespace to be "Ready" ...
	I0819 20:45:58.900110 1071714 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-876838" in "kube-system" namespace to be "Ready" ...
	I0819 20:45:58.900182 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-876838
	I0819 20:45:58.900194 1071714 round_trippers.go:469] Request Headers:
	I0819 20:45:58.900203 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:45:58.900207 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:45:58.903122 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:45:58.903789 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:45:58.903805 1071714 round_trippers.go:469] Request Headers:
	I0819 20:45:58.903814 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:45:58.903818 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:45:58.906826 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:45:58.907670 1071714 pod_ready.go:93] pod "etcd-ha-876838" in "kube-system" namespace has status "Ready":"True"
	I0819 20:45:58.907724 1071714 pod_ready.go:82] duration metric: took 7.605473ms for pod "etcd-ha-876838" in "kube-system" namespace to be "Ready" ...
	I0819 20:45:58.907751 1071714 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-876838-m02" in "kube-system" namespace to be "Ready" ...
	I0819 20:45:58.907870 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-876838-m02
	I0819 20:45:58.907899 1071714 round_trippers.go:469] Request Headers:
	I0819 20:45:58.907922 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:45:58.907946 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:45:58.910829 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:45:58.911776 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m02
	I0819 20:45:58.911792 1071714 round_trippers.go:469] Request Headers:
	I0819 20:45:58.911801 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:45:58.911804 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:45:58.914416 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:45:58.915205 1071714 pod_ready.go:93] pod "etcd-ha-876838-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 20:45:58.915251 1071714 pod_ready.go:82] duration metric: took 7.478754ms for pod "etcd-ha-876838-m02" in "kube-system" namespace to be "Ready" ...
	I0819 20:45:58.915293 1071714 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-876838-m03" in "kube-system" namespace to be "Ready" ...
	I0819 20:45:58.915410 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-876838-m03
	I0819 20:45:58.915456 1071714 round_trippers.go:469] Request Headers:
	I0819 20:45:58.915478 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:45:58.915498 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:45:58.919367 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:45:59.039639 1071714 request.go:632] Waited for 119.25946ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-876838-m03
	I0819 20:45:59.039741 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m03
	I0819 20:45:59.039753 1071714 round_trippers.go:469] Request Headers:
	I0819 20:45:59.039762 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:45:59.039768 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:45:59.042923 1071714 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 20:45:59.043345 1071714 pod_ready.go:98] node "ha-876838-m03" hosting pod "etcd-ha-876838-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-876838-m03": nodes "ha-876838-m03" not found
	I0819 20:45:59.043420 1071714 pod_ready.go:82] duration metric: took 128.101666ms for pod "etcd-ha-876838-m03" in "kube-system" namespace to be "Ready" ...
	E0819 20:45:59.043468 1071714 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-876838-m03" hosting pod "etcd-ha-876838-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-876838-m03": nodes "ha-876838-m03" not found
	I0819 20:45:59.043521 1071714 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-876838" in "kube-system" namespace to be "Ready" ...
	I0819 20:45:59.238793 1071714 request.go:632] Waited for 195.153567ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-876838
	I0819 20:45:59.238898 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-876838
	I0819 20:45:59.238963 1071714 round_trippers.go:469] Request Headers:
	I0819 20:45:59.238990 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:45:59.239009 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:45:59.246534 1071714 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0819 20:45:59.439460 1071714 request.go:632] Waited for 192.139231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:45:59.439520 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:45:59.439531 1071714 round_trippers.go:469] Request Headers:
	I0819 20:45:59.439559 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:45:59.439570 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:45:59.442286 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:45:59.443304 1071714 pod_ready.go:93] pod "kube-apiserver-ha-876838" in "kube-system" namespace has status "Ready":"True"
	I0819 20:45:59.443336 1071714 pod_ready.go:82] duration metric: took 399.768749ms for pod "kube-apiserver-ha-876838" in "kube-system" namespace to be "Ready" ...
	I0819 20:45:59.443348 1071714 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-876838-m02" in "kube-system" namespace to be "Ready" ...
	I0819 20:45:59.638654 1071714 request.go:632] Waited for 195.230244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-876838-m02
	I0819 20:45:59.638719 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-876838-m02
	I0819 20:45:59.638731 1071714 round_trippers.go:469] Request Headers:
	I0819 20:45:59.638741 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:45:59.638751 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:45:59.641799 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:45:59.839359 1071714 request.go:632] Waited for 196.297928ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-876838-m02
	I0819 20:45:59.839418 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m02
	I0819 20:45:59.839428 1071714 round_trippers.go:469] Request Headers:
	I0819 20:45:59.839455 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:45:59.839466 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:45:59.843181 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:45:59.844230 1071714 pod_ready.go:93] pod "kube-apiserver-ha-876838-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 20:45:59.844254 1071714 pod_ready.go:82] duration metric: took 400.897578ms for pod "kube-apiserver-ha-876838-m02" in "kube-system" namespace to be "Ready" ...
	I0819 20:45:59.844267 1071714 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-876838-m03" in "kube-system" namespace to be "Ready" ...
	I0819 20:46:00.038619 1071714 request.go:632] Waited for 194.25366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-876838-m03
	I0819 20:46:00.038693 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-876838-m03
	I0819 20:46:00.038702 1071714 round_trippers.go:469] Request Headers:
	I0819 20:46:00.038712 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:46:00.038721 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:46:00.047868 1071714 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0819 20:46:00.238575 1071714 request.go:632] Waited for 189.279273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-876838-m03
	I0819 20:46:00.238676 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m03
	I0819 20:46:00.238688 1071714 round_trippers.go:469] Request Headers:
	I0819 20:46:00.238697 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:46:00.238707 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:46:00.242098 1071714 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 20:46:00.242439 1071714 pod_ready.go:98] node "ha-876838-m03" hosting pod "kube-apiserver-ha-876838-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-876838-m03": nodes "ha-876838-m03" not found
	I0819 20:46:00.242469 1071714 pod_ready.go:82] duration metric: took 398.193591ms for pod "kube-apiserver-ha-876838-m03" in "kube-system" namespace to be "Ready" ...
	E0819 20:46:00.242484 1071714 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-876838-m03" hosting pod "kube-apiserver-ha-876838-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-876838-m03": nodes "ha-876838-m03" not found
	I0819 20:46:00.242499 1071714 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-876838" in "kube-system" namespace to be "Ready" ...
	I0819 20:46:00.438572 1071714 request.go:632] Waited for 195.992503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-876838
	I0819 20:46:00.438655 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-876838
	I0819 20:46:00.438666 1071714 round_trippers.go:469] Request Headers:
	I0819 20:46:00.438675 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:46:00.438687 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:46:00.447431 1071714 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0819 20:46:00.639139 1071714 request.go:632] Waited for 190.319085ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:46:00.639212 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:46:00.639224 1071714 round_trippers.go:469] Request Headers:
	I0819 20:46:00.639233 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:46:00.639236 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:46:00.642231 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:46:00.643122 1071714 pod_ready.go:93] pod "kube-controller-manager-ha-876838" in "kube-system" namespace has status "Ready":"True"
	I0819 20:46:00.643148 1071714 pod_ready.go:82] duration metric: took 400.63742ms for pod "kube-controller-manager-ha-876838" in "kube-system" namespace to be "Ready" ...
	I0819 20:46:00.643161 1071714 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-876838-m02" in "kube-system" namespace to be "Ready" ...
	I0819 20:46:00.838691 1071714 request.go:632] Waited for 195.441147ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-876838-m02
	I0819 20:46:00.838754 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-876838-m02
	I0819 20:46:00.838761 1071714 round_trippers.go:469] Request Headers:
	I0819 20:46:00.838770 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:46:00.838775 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:46:00.841669 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:46:01.038821 1071714 request.go:632] Waited for 196.250716ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-876838-m02
	I0819 20:46:01.038873 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m02
	I0819 20:46:01.038878 1071714 round_trippers.go:469] Request Headers:
	I0819 20:46:01.038886 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:46:01.038891 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:46:01.041721 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:46:01.042489 1071714 pod_ready.go:93] pod "kube-controller-manager-ha-876838-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 20:46:01.042511 1071714 pod_ready.go:82] duration metric: took 399.341544ms for pod "kube-controller-manager-ha-876838-m02" in "kube-system" namespace to be "Ready" ...
	I0819 20:46:01.042542 1071714 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-876838-m03" in "kube-system" namespace to be "Ready" ...
	I0819 20:46:01.239476 1071714 request.go:632] Waited for 196.842703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-876838-m03
	I0819 20:46:01.239577 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-876838-m03
	I0819 20:46:01.239592 1071714 round_trippers.go:469] Request Headers:
	I0819 20:46:01.239601 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:46:01.239605 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:46:01.242598 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:46:01.439543 1071714 request.go:632] Waited for 196.203701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-876838-m03
	I0819 20:46:01.439664 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m03
	I0819 20:46:01.439693 1071714 round_trippers.go:469] Request Headers:
	I0819 20:46:01.439721 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:46:01.439755 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:46:01.442397 1071714 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 20:46:01.442554 1071714 pod_ready.go:98] node "ha-876838-m03" hosting pod "kube-controller-manager-ha-876838-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-876838-m03": nodes "ha-876838-m03" not found
	I0819 20:46:01.442592 1071714 pod_ready.go:82] duration metric: took 400.03443ms for pod "kube-controller-manager-ha-876838-m03" in "kube-system" namespace to be "Ready" ...
	E0819 20:46:01.442618 1071714 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-876838-m03" hosting pod "kube-controller-manager-ha-876838-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-876838-m03": nodes "ha-876838-m03" not found
	I0819 20:46:01.442646 1071714 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-d6lm2" in "kube-system" namespace to be "Ready" ...
	I0819 20:46:01.639156 1071714 request.go:632] Waited for 196.401467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d6lm2
	I0819 20:46:01.639255 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d6lm2
	I0819 20:46:01.639285 1071714 round_trippers.go:469] Request Headers:
	I0819 20:46:01.639308 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:46:01.639333 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:46:01.642636 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:46:01.838586 1071714 request.go:632] Waited for 195.262564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-876838-m02
	I0819 20:46:01.838668 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m02
	I0819 20:46:01.838680 1071714 round_trippers.go:469] Request Headers:
	I0819 20:46:01.838689 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:46:01.838697 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:46:01.841483 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:46:01.842158 1071714 pod_ready.go:93] pod "kube-proxy-d6lm2" in "kube-system" namespace has status "Ready":"True"
	I0819 20:46:01.842183 1071714 pod_ready.go:82] duration metric: took 399.510192ms for pod "kube-proxy-d6lm2" in "kube-system" namespace to be "Ready" ...
	I0819 20:46:01.842237 1071714 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hpt5s" in "kube-system" namespace to be "Ready" ...
	I0819 20:46:02.038557 1071714 request.go:632] Waited for 196.242946ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hpt5s
	I0819 20:46:02.038685 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hpt5s
	I0819 20:46:02.038725 1071714 round_trippers.go:469] Request Headers:
	I0819 20:46:02.038752 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:46:02.038783 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:46:02.044962 1071714 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 20:46:02.239296 1071714 request.go:632] Waited for 193.344768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-876838-m03
	I0819 20:46:02.239361 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m03
	I0819 20:46:02.239366 1071714 round_trippers.go:469] Request Headers:
	I0819 20:46:02.239376 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:46:02.239384 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:46:02.242046 1071714 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 20:46:02.242165 1071714 pod_ready.go:98] node "ha-876838-m03" hosting pod "kube-proxy-hpt5s" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-876838-m03": nodes "ha-876838-m03" not found
	I0819 20:46:02.242188 1071714 pod_ready.go:82] duration metric: took 399.934024ms for pod "kube-proxy-hpt5s" in "kube-system" namespace to be "Ready" ...
	E0819 20:46:02.242198 1071714 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-876838-m03" hosting pod "kube-proxy-hpt5s" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-876838-m03": nodes "ha-876838-m03" not found
	I0819 20:46:02.242242 1071714 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lvqhn" in "kube-system" namespace to be "Ready" ...
	I0819 20:46:02.438779 1071714 request.go:632] Waited for 196.447843ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lvqhn
	I0819 20:46:02.438842 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lvqhn
	I0819 20:46:02.438852 1071714 round_trippers.go:469] Request Headers:
	I0819 20:46:02.438861 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:46:02.438874 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:46:02.441786 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:46:02.639258 1071714 request.go:632] Waited for 196.155135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-876838-m04
	I0819 20:46:02.639319 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m04
	I0819 20:46:02.639325 1071714 round_trippers.go:469] Request Headers:
	I0819 20:46:02.639333 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:46:02.639337 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:46:02.642020 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:46:02.642652 1071714 pod_ready.go:93] pod "kube-proxy-lvqhn" in "kube-system" namespace has status "Ready":"True"
	I0819 20:46:02.642704 1071714 pod_ready.go:82] duration metric: took 400.451674ms for pod "kube-proxy-lvqhn" in "kube-system" namespace to be "Ready" ...
	I0819 20:46:02.642732 1071714 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n6xdk" in "kube-system" namespace to be "Ready" ...
	I0819 20:46:02.839080 1071714 request.go:632] Waited for 196.261834ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n6xdk
	I0819 20:46:02.839139 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n6xdk
	I0819 20:46:02.839150 1071714 round_trippers.go:469] Request Headers:
	I0819 20:46:02.839160 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:46:02.839169 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:46:02.842799 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:46:03.039211 1071714 request.go:632] Waited for 195.300873ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:46:03.039331 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:46:03.039354 1071714 round_trippers.go:469] Request Headers:
	I0819 20:46:03.039365 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:46:03.039379 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:46:03.044247 1071714 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 20:46:03.044957 1071714 pod_ready.go:93] pod "kube-proxy-n6xdk" in "kube-system" namespace has status "Ready":"True"
	I0819 20:46:03.044979 1071714 pod_ready.go:82] duration metric: took 402.218848ms for pod "kube-proxy-n6xdk" in "kube-system" namespace to be "Ready" ...
	I0819 20:46:03.044992 1071714 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-876838" in "kube-system" namespace to be "Ready" ...
	I0819 20:46:03.239522 1071714 request.go:632] Waited for 194.430061ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-876838
	I0819 20:46:03.239589 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-876838
	I0819 20:46:03.239599 1071714 round_trippers.go:469] Request Headers:
	I0819 20:46:03.239609 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:46:03.239613 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:46:03.242536 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:46:03.439513 1071714 request.go:632] Waited for 196.336402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:46:03.439567 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:46:03.439573 1071714 round_trippers.go:469] Request Headers:
	I0819 20:46:03.439582 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:46:03.439589 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:46:03.442258 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:46:03.442806 1071714 pod_ready.go:93] pod "kube-scheduler-ha-876838" in "kube-system" namespace has status "Ready":"True"
	I0819 20:46:03.442828 1071714 pod_ready.go:82] duration metric: took 397.828309ms for pod "kube-scheduler-ha-876838" in "kube-system" namespace to be "Ready" ...
	I0819 20:46:03.442839 1071714 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-876838-m02" in "kube-system" namespace to be "Ready" ...
	I0819 20:46:03.638770 1071714 request.go:632] Waited for 195.827638ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-876838-m02
	I0819 20:46:03.638837 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-876838-m02
	I0819 20:46:03.638849 1071714 round_trippers.go:469] Request Headers:
	I0819 20:46:03.638858 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:46:03.638867 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:46:03.645899 1071714 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0819 20:46:03.839224 1071714 request.go:632] Waited for 192.312588ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-876838-m02
	I0819 20:46:03.839279 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m02
	I0819 20:46:03.839289 1071714 round_trippers.go:469] Request Headers:
	I0819 20:46:03.839308 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:46:03.839316 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:46:03.847588 1071714 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0819 20:46:03.848296 1071714 pod_ready.go:93] pod "kube-scheduler-ha-876838-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 20:46:03.848317 1071714 pod_ready.go:82] duration metric: took 405.46743ms for pod "kube-scheduler-ha-876838-m02" in "kube-system" namespace to be "Ready" ...
	I0819 20:46:03.848330 1071714 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-876838-m03" in "kube-system" namespace to be "Ready" ...
	I0819 20:46:04.038656 1071714 request.go:632] Waited for 190.237518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-876838-m03
	I0819 20:46:04.038722 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-876838-m03
	I0819 20:46:04.038733 1071714 round_trippers.go:469] Request Headers:
	I0819 20:46:04.038742 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:46:04.038752 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:46:04.043347 1071714 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 20:46:04.238604 1071714 request.go:632] Waited for 194.247194ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-876838-m03
	I0819 20:46:04.238668 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m03
	I0819 20:46:04.238674 1071714 round_trippers.go:469] Request Headers:
	I0819 20:46:04.238681 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:46:04.238688 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:46:04.243753 1071714 round_trippers.go:574] Response Status: 404 Not Found in 5 milliseconds
	I0819 20:46:04.244015 1071714 pod_ready.go:98] node "ha-876838-m03" hosting pod "kube-scheduler-ha-876838-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-876838-m03": nodes "ha-876838-m03" not found
	I0819 20:46:04.244039 1071714 pod_ready.go:82] duration metric: took 395.701096ms for pod "kube-scheduler-ha-876838-m03" in "kube-system" namespace to be "Ready" ...
	E0819 20:46:04.244050 1071714 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-876838-m03" hosting pod "kube-scheduler-ha-876838-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-876838-m03": nodes "ha-876838-m03" not found
	I0819 20:46:04.244059 1071714 pod_ready.go:39] duration metric: took 6.411644691s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 20:46:04.244131 1071714 api_server.go:52] waiting for apiserver process to appear ...
	I0819 20:46:04.244224 1071714 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:46:04.261750 1071714 api_server.go:72] duration metric: took 28.406206503s to wait for apiserver process to appear ...
	I0819 20:46:04.261821 1071714 api_server.go:88] waiting for apiserver healthz status ...
	I0819 20:46:04.261856 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:04.270983 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:04.271055 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
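	The 500 responses above come from /healthz while the start-service-ip-repair-controllers post-start hook is still reporting failure; the code simply re-probes roughly every 500ms until the endpoint returns 200. A standalone sketch of such a wait loop follows, assuming anonymous access to /healthz (normally allowed by the default system:public-info-viewer binding) and skipping CA verification for brevity; the log's own client instead authenticates with the minikube certificates.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthy polls url until it returns 200 or the timeout elapses.
	func waitHealthy(url string, timeout time.Duration) error {
		// InsecureSkipVerify only because this sketch has no access to the cluster CA.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				// Print the check list (the [+]/[-] lines seen in the log) and keep waiting.
				fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy within %s", timeout)
	}

	func main() {
		if err := waitHealthy("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}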
	I0819 20:46:04.762634 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:04.770188 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:04.770234 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:05.262810 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:05.272077 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:05.272121 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:05.762952 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:05.770561 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:05.770592 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:06.261966 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:06.270468 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:06.270499 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:06.762013 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:06.769822 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:06.769851 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:07.261984 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:07.270566 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:07.270598 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:07.762276 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:07.770187 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:07.770220 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:08.262582 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:08.274149 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:08.274184 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:08.762777 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:08.770529 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:08.770567 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:09.262106 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:09.270075 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:09.270111 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:09.762778 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:09.770599 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:09.770630 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:10.261912 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:10.270559 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:10.270605 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:10.762510 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:10.772029 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:10.772056 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:11.262447 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:11.270033 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:11.270060 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:11.762615 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:11.883801 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:11.883841 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:12.261998 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:12.278113 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:12.278190 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:12.762749 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:12.778496 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:12.778571 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	[... apiserver healthz polling continued at ~500 ms intervals from 20:46:13 through 20:46:21; every check returned HTTP 500 with output identical to the block above (only poststarthook/start-service-ip-repair-controllers failing); the repeated identical blocks are elided here ...]
	W0819 20:46:21.270443 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:21.762741 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:21.777132 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:21.777164 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:22.262759 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:22.271054 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:22.271085 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:22.762670 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:22.770292 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:22.770321 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:23.262429 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:23.270519 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:23.270547 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:23.762756 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:23.785637 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:23.785668 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:24.262177 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:24.271342 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:24.271372 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:24.762706 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:24.770588 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:24.770618 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:25.262124 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:25.271697 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:25.271727 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:25.762413 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:25.770096 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:25.770133 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:26.262797 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:26.270628 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:26.270659 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:26.762321 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:26.770408 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:26.770436 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:27.262537 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:27.271646 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:27.271674 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:27.762045 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:27.769487 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:27.769517 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:28.262811 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:28.270676 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:28.270710 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:28.762007 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:28.770567 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:28.770601 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:29.262051 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:29.270141 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:29.270166 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:29.762763 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:29.770566 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:29.770595 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:30.262825 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:30.273236 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:30.273272 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:30.762764 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:30.771259 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:30.771290 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:31.262946 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:31.271011 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:31.271041 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:31.762538 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:31.771524 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:31.771554 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:32.262182 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:32.270349 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:32.270375 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:32.762739 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:32.771578 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:32.771616 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:33.262477 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:33.270915 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:33.270947 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:33.762527 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:33.770375 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:33.770405 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:34.262519 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:34.272036 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:34.272074 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:34.762745 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:34.770381 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:34.770417 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:35.261912 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:35.269463 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:35.269490 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 20:46:35.761969 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:35.769574 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:35.769627 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
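The loop above polls the apiserver's /healthz endpoint roughly twice a second; every attempt returns 500 because the start-service-ip-repair-controllers post-start hook has not completed. The same endpoint can be probed by hand to watch that hook flip to ok. A minimal sketch, assuming a default kubeadm-style apiserver where the system:public-info-viewer role allows anonymous access to /healthz (the -k flag only skips verification of the self-signed serving certificate):

	# Same verbose health report the log is showing, fetched directly:
	curl -k "https://192.168.49.2:8443/healthz?verbose"
	# Each check is also exposed as its own subpath, so the failing hook can be
	# watched in isolation until it starts returning ok:
	curl -k "https://192.168.49.2:8443/healthz/poststarthook/start-service-ip-repair-controllers"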
	I0819 20:46:36.262048 1071714 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:46:36.262219 1071714 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:46:36.301698 1071714 cri.go:89] found id: "d5cc1edbdd55929c146f6fb53ed2f57b6108bc8e82f722998e20ee439a73d17a"
	I0819 20:46:36.301721 1071714 cri.go:89] found id: "a7474fb528b0f232d2a4dd5bdecc4d6f265dd804b40b08c5344d8626f48c6270"
	I0819 20:46:36.301726 1071714 cri.go:89] found id: ""
	I0819 20:46:36.301734 1071714 logs.go:276] 2 containers: [d5cc1edbdd55929c146f6fb53ed2f57b6108bc8e82f722998e20ee439a73d17a a7474fb528b0f232d2a4dd5bdecc4d6f265dd804b40b08c5344d8626f48c6270]
	I0819 20:46:36.301795 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:46:36.305501 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:46:36.308961 1071714 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:46:36.309041 1071714 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:46:36.350705 1071714 cri.go:89] found id: "adec2e728fbaa3f4b0d51339013498e2b8a33698314d4759a9ee3aebaaf9e1e1"
	I0819 20:46:36.350732 1071714 cri.go:89] found id: "381d982ec2c86f0a282c4cf543d191c3ba5e710d522cb4ba3a70dfadbbd9ae38"
	I0819 20:46:36.350738 1071714 cri.go:89] found id: ""
	I0819 20:46:36.350745 1071714 logs.go:276] 2 containers: [adec2e728fbaa3f4b0d51339013498e2b8a33698314d4759a9ee3aebaaf9e1e1 381d982ec2c86f0a282c4cf543d191c3ba5e710d522cb4ba3a70dfadbbd9ae38]
	I0819 20:46:36.350811 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:46:36.354550 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:46:36.357971 1071714 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:46:36.358064 1071714 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:46:36.395770 1071714 cri.go:89] found id: ""
	I0819 20:46:36.395795 1071714 logs.go:276] 0 containers: []
	W0819 20:46:36.395804 1071714 logs.go:278] No container was found matching "coredns"
	I0819 20:46:36.395811 1071714 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:46:36.395898 1071714 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:46:36.441194 1071714 cri.go:89] found id: "20805e1a15da9a9103e2102cd48852cb6963407c7689354525ff0f969ae6a248"
	I0819 20:46:36.441216 1071714 cri.go:89] found id: "be36764d53301119df7c5af8f9080d6c5b279a12540211ed85ba85a79fc7a09e"
	I0819 20:46:36.441221 1071714 cri.go:89] found id: ""
	I0819 20:46:36.441228 1071714 logs.go:276] 2 containers: [20805e1a15da9a9103e2102cd48852cb6963407c7689354525ff0f969ae6a248 be36764d53301119df7c5af8f9080d6c5b279a12540211ed85ba85a79fc7a09e]
	I0819 20:46:36.441307 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:46:36.445057 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:46:36.448378 1071714 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:46:36.448449 1071714 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:46:36.485765 1071714 cri.go:89] found id: "ad8a1e4af013e7825d444b4d808bbe29ce7a4cc7c594fb604b02a974a49c7e25"
	I0819 20:46:36.485788 1071714 cri.go:89] found id: ""
	I0819 20:46:36.485796 1071714 logs.go:276] 1 containers: [ad8a1e4af013e7825d444b4d808bbe29ce7a4cc7c594fb604b02a974a49c7e25]
	I0819 20:46:36.485866 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:46:36.489327 1071714 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:46:36.489398 1071714 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:46:36.526480 1071714 cri.go:89] found id: "50331fe4aa06ec664c84abe0e67ed361c8bf8e1c7a0d28b31729902d10cab6be"
	I0819 20:46:36.526504 1071714 cri.go:89] found id: "3d07d7c1debd1b26ff207c4f9fc95a70c0aea092bbd5a8c504419cef492ab4dd"
	I0819 20:46:36.526509 1071714 cri.go:89] found id: ""
	I0819 20:46:36.526516 1071714 logs.go:276] 2 containers: [50331fe4aa06ec664c84abe0e67ed361c8bf8e1c7a0d28b31729902d10cab6be 3d07d7c1debd1b26ff207c4f9fc95a70c0aea092bbd5a8c504419cef492ab4dd]
	I0819 20:46:36.526576 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:46:36.530015 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:46:36.533292 1071714 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:46:36.533366 1071714 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:46:36.571556 1071714 cri.go:89] found id: "cd1743a0c29f7c3c40d4bdc5dc99eb01da4c19efe4370cf0c4e31dfc67beef6b"
	I0819 20:46:36.571583 1071714 cri.go:89] found id: ""
	I0819 20:46:36.571592 1071714 logs.go:276] 1 containers: [cd1743a0c29f7c3c40d4bdc5dc99eb01da4c19efe4370cf0c4e31dfc67beef6b]
	I0819 20:46:36.571666 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:46:36.575181 1071714 logs.go:123] Gathering logs for kubelet ...
	I0819 20:46:36.575205 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:46:36.643000 1071714 logs.go:123] Gathering logs for dmesg ...
	I0819 20:46:36.643036 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:46:36.661153 1071714 logs.go:123] Gathering logs for kube-apiserver [d5cc1edbdd55929c146f6fb53ed2f57b6108bc8e82f722998e20ee439a73d17a] ...
	I0819 20:46:36.661186 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5cc1edbdd55929c146f6fb53ed2f57b6108bc8e82f722998e20ee439a73d17a"
	I0819 20:46:36.731783 1071714 logs.go:123] Gathering logs for kube-controller-manager [3d07d7c1debd1b26ff207c4f9fc95a70c0aea092bbd5a8c504419cef492ab4dd] ...
	I0819 20:46:36.731856 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d07d7c1debd1b26ff207c4f9fc95a70c0aea092bbd5a8c504419cef492ab4dd"
	I0819 20:46:36.794976 1071714 logs.go:123] Gathering logs for container status ...
	I0819 20:46:36.795006 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:46:36.865308 1071714 logs.go:123] Gathering logs for kube-controller-manager [50331fe4aa06ec664c84abe0e67ed361c8bf8e1c7a0d28b31729902d10cab6be] ...
	I0819 20:46:36.865339 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50331fe4aa06ec664c84abe0e67ed361c8bf8e1c7a0d28b31729902d10cab6be"
	I0819 20:46:36.959418 1071714 logs.go:123] Gathering logs for kindnet [cd1743a0c29f7c3c40d4bdc5dc99eb01da4c19efe4370cf0c4e31dfc67beef6b] ...
	I0819 20:46:36.959461 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd1743a0c29f7c3c40d4bdc5dc99eb01da4c19efe4370cf0c4e31dfc67beef6b"
	I0819 20:46:37.017398 1071714 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:46:37.017436 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 20:46:37.324244 1071714 logs.go:123] Gathering logs for kube-apiserver [a7474fb528b0f232d2a4dd5bdecc4d6f265dd804b40b08c5344d8626f48c6270] ...
	I0819 20:46:37.324281 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7474fb528b0f232d2a4dd5bdecc4d6f265dd804b40b08c5344d8626f48c6270"
	I0819 20:46:37.363221 1071714 logs.go:123] Gathering logs for etcd [adec2e728fbaa3f4b0d51339013498e2b8a33698314d4759a9ee3aebaaf9e1e1] ...
	I0819 20:46:37.363250 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adec2e728fbaa3f4b0d51339013498e2b8a33698314d4759a9ee3aebaaf9e1e1"
	I0819 20:46:37.413312 1071714 logs.go:123] Gathering logs for etcd [381d982ec2c86f0a282c4cf543d191c3ba5e710d522cb4ba3a70dfadbbd9ae38] ...
	I0819 20:46:37.413345 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 381d982ec2c86f0a282c4cf543d191c3ba5e710d522cb4ba3a70dfadbbd9ae38"
	I0819 20:46:37.475226 1071714 logs.go:123] Gathering logs for kube-scheduler [be36764d53301119df7c5af8f9080d6c5b279a12540211ed85ba85a79fc7a09e] ...
	I0819 20:46:37.475261 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be36764d53301119df7c5af8f9080d6c5b279a12540211ed85ba85a79fc7a09e"
	I0819 20:46:37.517337 1071714 logs.go:123] Gathering logs for kube-proxy [ad8a1e4af013e7825d444b4d808bbe29ce7a4cc7c594fb604b02a974a49c7e25] ...
	I0819 20:46:37.517369 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad8a1e4af013e7825d444b4d808bbe29ce7a4cc7c594fb604b02a974a49c7e25"
	I0819 20:46:37.558472 1071714 logs.go:123] Gathering logs for kube-scheduler [20805e1a15da9a9103e2102cd48852cb6963407c7689354525ff0f969ae6a248] ...
	I0819 20:46:37.558505 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20805e1a15da9a9103e2102cd48852cb6963407c7689354525ff0f969ae6a248"
	I0819 20:46:37.598843 1071714 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:46:37.598873 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:46:40.170636 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:41.548727 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 20:46:41.548753 1071714 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
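
The 500 responses above are minikube polling the apiserver's /healthz endpoint while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still pending; once they complete, the same probe returns 200 "ok" further down. A minimal Go sketch of that kind of polling loop follows. The URL, timeout, and interval are illustrative assumptions, not minikube's actual api_server.go implementation.

```go
// Minimal sketch (not minikube's code) of polling the apiserver /healthz
// endpoint until it returns 200, as the log above shows.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed certificate, so skip
		// verification for this illustrative probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "ok" — control plane is healthy
			}
			// A 500 here lists the failing post-start hooks,
			// e.g. rbac/bootstrap-roles, as in the log above.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```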
	I0819 20:46:41.548782 1071714 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:46:41.548848 1071714 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:46:41.616971 1071714 cri.go:89] found id: "d5cc1edbdd55929c146f6fb53ed2f57b6108bc8e82f722998e20ee439a73d17a"
	I0819 20:46:41.616991 1071714 cri.go:89] found id: "a7474fb528b0f232d2a4dd5bdecc4d6f265dd804b40b08c5344d8626f48c6270"
	I0819 20:46:41.616995 1071714 cri.go:89] found id: ""
	I0819 20:46:41.617002 1071714 logs.go:276] 2 containers: [d5cc1edbdd55929c146f6fb53ed2f57b6108bc8e82f722998e20ee439a73d17a a7474fb528b0f232d2a4dd5bdecc4d6f265dd804b40b08c5344d8626f48c6270]
	I0819 20:46:41.617059 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:46:41.626305 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:46:41.631259 1071714 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:46:41.631326 1071714 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:46:41.703875 1071714 cri.go:89] found id: "adec2e728fbaa3f4b0d51339013498e2b8a33698314d4759a9ee3aebaaf9e1e1"
	I0819 20:46:41.703895 1071714 cri.go:89] found id: "381d982ec2c86f0a282c4cf543d191c3ba5e710d522cb4ba3a70dfadbbd9ae38"
	I0819 20:46:41.703900 1071714 cri.go:89] found id: ""
	I0819 20:46:41.703907 1071714 logs.go:276] 2 containers: [adec2e728fbaa3f4b0d51339013498e2b8a33698314d4759a9ee3aebaaf9e1e1 381d982ec2c86f0a282c4cf543d191c3ba5e710d522cb4ba3a70dfadbbd9ae38]
	I0819 20:46:41.703963 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:46:41.708841 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:46:41.716321 1071714 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:46:41.716396 1071714 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:46:41.780219 1071714 cri.go:89] found id: ""
	I0819 20:46:41.780240 1071714 logs.go:276] 0 containers: []
	W0819 20:46:41.780249 1071714 logs.go:278] No container was found matching "coredns"
	I0819 20:46:41.780255 1071714 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:46:41.780323 1071714 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:46:41.841854 1071714 cri.go:89] found id: "20805e1a15da9a9103e2102cd48852cb6963407c7689354525ff0f969ae6a248"
	I0819 20:46:41.841877 1071714 cri.go:89] found id: "be36764d53301119df7c5af8f9080d6c5b279a12540211ed85ba85a79fc7a09e"
	I0819 20:46:41.841882 1071714 cri.go:89] found id: ""
	I0819 20:46:41.841888 1071714 logs.go:276] 2 containers: [20805e1a15da9a9103e2102cd48852cb6963407c7689354525ff0f969ae6a248 be36764d53301119df7c5af8f9080d6c5b279a12540211ed85ba85a79fc7a09e]
	I0819 20:46:41.841943 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:46:41.846159 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:46:41.850266 1071714 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:46:41.850335 1071714 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:46:41.913123 1071714 cri.go:89] found id: "ad8a1e4af013e7825d444b4d808bbe29ce7a4cc7c594fb604b02a974a49c7e25"
	I0819 20:46:41.913144 1071714 cri.go:89] found id: ""
	I0819 20:46:41.913152 1071714 logs.go:276] 1 containers: [ad8a1e4af013e7825d444b4d808bbe29ce7a4cc7c594fb604b02a974a49c7e25]
	I0819 20:46:41.913212 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:46:41.920729 1071714 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:46:41.920796 1071714 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:46:41.979532 1071714 cri.go:89] found id: "50331fe4aa06ec664c84abe0e67ed361c8bf8e1c7a0d28b31729902d10cab6be"
	I0819 20:46:41.979551 1071714 cri.go:89] found id: "3d07d7c1debd1b26ff207c4f9fc95a70c0aea092bbd5a8c504419cef492ab4dd"
	I0819 20:46:41.979556 1071714 cri.go:89] found id: ""
	I0819 20:46:41.979563 1071714 logs.go:276] 2 containers: [50331fe4aa06ec664c84abe0e67ed361c8bf8e1c7a0d28b31729902d10cab6be 3d07d7c1debd1b26ff207c4f9fc95a70c0aea092bbd5a8c504419cef492ab4dd]
	I0819 20:46:41.979617 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:46:41.984664 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:46:41.988328 1071714 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:46:41.988448 1071714 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:46:42.041264 1071714 cri.go:89] found id: "cd1743a0c29f7c3c40d4bdc5dc99eb01da4c19efe4370cf0c4e31dfc67beef6b"
	I0819 20:46:42.041284 1071714 cri.go:89] found id: ""
	I0819 20:46:42.041292 1071714 logs.go:276] 1 containers: [cd1743a0c29f7c3c40d4bdc5dc99eb01da4c19efe4370cf0c4e31dfc67beef6b]
	I0819 20:46:42.041353 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:46:42.045399 1071714 logs.go:123] Gathering logs for kube-apiserver [d5cc1edbdd55929c146f6fb53ed2f57b6108bc8e82f722998e20ee439a73d17a] ...
	I0819 20:46:42.045465 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5cc1edbdd55929c146f6fb53ed2f57b6108bc8e82f722998e20ee439a73d17a"
	I0819 20:46:42.104219 1071714 logs.go:123] Gathering logs for kube-controller-manager [3d07d7c1debd1b26ff207c4f9fc95a70c0aea092bbd5a8c504419cef492ab4dd] ...
	I0819 20:46:42.104253 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d07d7c1debd1b26ff207c4f9fc95a70c0aea092bbd5a8c504419cef492ab4dd"
	I0819 20:46:42.161198 1071714 logs.go:123] Gathering logs for container status ...
	I0819 20:46:42.161228 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:46:42.245661 1071714 logs.go:123] Gathering logs for etcd [adec2e728fbaa3f4b0d51339013498e2b8a33698314d4759a9ee3aebaaf9e1e1] ...
	I0819 20:46:42.245750 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adec2e728fbaa3f4b0d51339013498e2b8a33698314d4759a9ee3aebaaf9e1e1"
	I0819 20:46:42.342169 1071714 logs.go:123] Gathering logs for etcd [381d982ec2c86f0a282c4cf543d191c3ba5e710d522cb4ba3a70dfadbbd9ae38] ...
	I0819 20:46:42.342394 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 381d982ec2c86f0a282c4cf543d191c3ba5e710d522cb4ba3a70dfadbbd9ae38"
	I0819 20:46:42.447308 1071714 logs.go:123] Gathering logs for kube-proxy [ad8a1e4af013e7825d444b4d808bbe29ce7a4cc7c594fb604b02a974a49c7e25] ...
	I0819 20:46:42.447343 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad8a1e4af013e7825d444b4d808bbe29ce7a4cc7c594fb604b02a974a49c7e25"
	I0819 20:46:42.585452 1071714 logs.go:123] Gathering logs for dmesg ...
	I0819 20:46:42.585488 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:46:42.620597 1071714 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:46:42.620630 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 20:46:42.946359 1071714 logs.go:123] Gathering logs for kube-apiserver [a7474fb528b0f232d2a4dd5bdecc4d6f265dd804b40b08c5344d8626f48c6270] ...
	I0819 20:46:42.946399 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7474fb528b0f232d2a4dd5bdecc4d6f265dd804b40b08c5344d8626f48c6270"
	I0819 20:46:43.005759 1071714 logs.go:123] Gathering logs for kube-scheduler [be36764d53301119df7c5af8f9080d6c5b279a12540211ed85ba85a79fc7a09e] ...
	I0819 20:46:43.005792 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be36764d53301119df7c5af8f9080d6c5b279a12540211ed85ba85a79fc7a09e"
	I0819 20:46:43.060520 1071714 logs.go:123] Gathering logs for kube-controller-manager [50331fe4aa06ec664c84abe0e67ed361c8bf8e1c7a0d28b31729902d10cab6be] ...
	I0819 20:46:43.060590 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50331fe4aa06ec664c84abe0e67ed361c8bf8e1c7a0d28b31729902d10cab6be"
	I0819 20:46:43.122764 1071714 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:46:43.122847 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:46:43.201087 1071714 logs.go:123] Gathering logs for kubelet ...
	I0819 20:46:43.201176 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:46:43.278417 1071714 logs.go:123] Gathering logs for kube-scheduler [20805e1a15da9a9103e2102cd48852cb6963407c7689354525ff0f969ae6a248] ...
	I0819 20:46:43.278466 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20805e1a15da9a9103e2102cd48852cb6963407c7689354525ff0f969ae6a248"
	I0819 20:46:43.324531 1071714 logs.go:123] Gathering logs for kindnet [cd1743a0c29f7c3c40d4bdc5dc99eb01da4c19efe4370cf0c4e31dfc67beef6b] ...
	I0819 20:46:43.324562 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd1743a0c29f7c3c40d4bdc5dc99eb01da4c19efe4370cf0c4e31dfc67beef6b"
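
Each "Gathering logs for …" pair above first resolves a component name to container IDs with `sudo crictl ps -a --quiet --name=<component>` and then tails each container with `sudo crictl logs --tail 400 <id>`. The sketch below mirrors those two commands from Go via os/exec; the helper names are hypothetical and the component list is only an example, but the crictl invocations are the ones shown in the log.

```go
// Illustrative sketch of the crictl calls visible in the log above:
// list container IDs for a component, then tail each container's logs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs `sudo crictl ps -a --quiet --name=<name>` and returns
// the whitespace-separated IDs, matching the "found id:" log lines.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs runs `sudo crictl logs --tail 400 <id>` for one container.
func tailLogs(id string) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
		ids, err := containerIDs(component)
		if err != nil {
			fmt.Println("listing", component, "failed:", err)
			continue
		}
		for _, id := range ids {
			logs, _ := tailLogs(id)
			fmt.Printf("=== %s [%s] ===\n%s\n", component, id, logs)
		}
	}
}
```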
	I0819 20:46:45.884796 1071714 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 20:46:45.894059 1071714 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0819 20:46:45.894186 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0819 20:46:45.894206 1071714 round_trippers.go:469] Request Headers:
	I0819 20:46:45.894219 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:46:45.894238 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:46:45.909089 1071714 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0819 20:46:45.909254 1071714 api_server.go:141] control plane version: v1.31.0
	I0819 20:46:45.909293 1071714 api_server.go:131] duration metric: took 41.6474437s to wait for apiserver health ...
	I0819 20:46:45.909311 1071714 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 20:46:45.909342 1071714 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 20:46:45.909436 1071714 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 20:46:45.949933 1071714 cri.go:89] found id: "d5cc1edbdd55929c146f6fb53ed2f57b6108bc8e82f722998e20ee439a73d17a"
	I0819 20:46:45.949954 1071714 cri.go:89] found id: "a7474fb528b0f232d2a4dd5bdecc4d6f265dd804b40b08c5344d8626f48c6270"
	I0819 20:46:45.949959 1071714 cri.go:89] found id: ""
	I0819 20:46:45.949966 1071714 logs.go:276] 2 containers: [d5cc1edbdd55929c146f6fb53ed2f57b6108bc8e82f722998e20ee439a73d17a a7474fb528b0f232d2a4dd5bdecc4d6f265dd804b40b08c5344d8626f48c6270]
	I0819 20:46:45.950024 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:46:45.954689 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:46:45.958376 1071714 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 20:46:45.958453 1071714 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 20:46:46.016516 1071714 cri.go:89] found id: "adec2e728fbaa3f4b0d51339013498e2b8a33698314d4759a9ee3aebaaf9e1e1"
	I0819 20:46:46.016543 1071714 cri.go:89] found id: "381d982ec2c86f0a282c4cf543d191c3ba5e710d522cb4ba3a70dfadbbd9ae38"
	I0819 20:46:46.016548 1071714 cri.go:89] found id: ""
	I0819 20:46:46.016556 1071714 logs.go:276] 2 containers: [adec2e728fbaa3f4b0d51339013498e2b8a33698314d4759a9ee3aebaaf9e1e1 381d982ec2c86f0a282c4cf543d191c3ba5e710d522cb4ba3a70dfadbbd9ae38]
	I0819 20:46:46.016620 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:46:46.020946 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:46:46.025071 1071714 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 20:46:46.025148 1071714 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 20:46:46.066036 1071714 cri.go:89] found id: ""
	I0819 20:46:46.066115 1071714 logs.go:276] 0 containers: []
	W0819 20:46:46.066139 1071714 logs.go:278] No container was found matching "coredns"
	I0819 20:46:46.066180 1071714 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 20:46:46.066313 1071714 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 20:46:46.115573 1071714 cri.go:89] found id: "20805e1a15da9a9103e2102cd48852cb6963407c7689354525ff0f969ae6a248"
	I0819 20:46:46.115595 1071714 cri.go:89] found id: "be36764d53301119df7c5af8f9080d6c5b279a12540211ed85ba85a79fc7a09e"
	I0819 20:46:46.115605 1071714 cri.go:89] found id: ""
	I0819 20:46:46.115613 1071714 logs.go:276] 2 containers: [20805e1a15da9a9103e2102cd48852cb6963407c7689354525ff0f969ae6a248 be36764d53301119df7c5af8f9080d6c5b279a12540211ed85ba85a79fc7a09e]
	I0819 20:46:46.115667 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:46:46.121092 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:46:46.124996 1071714 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 20:46:46.125093 1071714 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 20:46:46.164053 1071714 cri.go:89] found id: "ad8a1e4af013e7825d444b4d808bbe29ce7a4cc7c594fb604b02a974a49c7e25"
	I0819 20:46:46.164078 1071714 cri.go:89] found id: ""
	I0819 20:46:46.164087 1071714 logs.go:276] 1 containers: [ad8a1e4af013e7825d444b4d808bbe29ce7a4cc7c594fb604b02a974a49c7e25]
	I0819 20:46:46.164158 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:46:46.167813 1071714 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 20:46:46.167887 1071714 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 20:46:46.206987 1071714 cri.go:89] found id: "50331fe4aa06ec664c84abe0e67ed361c8bf8e1c7a0d28b31729902d10cab6be"
	I0819 20:46:46.207010 1071714 cri.go:89] found id: "3d07d7c1debd1b26ff207c4f9fc95a70c0aea092bbd5a8c504419cef492ab4dd"
	I0819 20:46:46.207016 1071714 cri.go:89] found id: ""
	I0819 20:46:46.207023 1071714 logs.go:276] 2 containers: [50331fe4aa06ec664c84abe0e67ed361c8bf8e1c7a0d28b31729902d10cab6be 3d07d7c1debd1b26ff207c4f9fc95a70c0aea092bbd5a8c504419cef492ab4dd]
	I0819 20:46:46.207100 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:46:46.211138 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:46:46.214765 1071714 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 20:46:46.214890 1071714 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 20:46:46.255989 1071714 cri.go:89] found id: "cd1743a0c29f7c3c40d4bdc5dc99eb01da4c19efe4370cf0c4e31dfc67beef6b"
	I0819 20:46:46.256011 1071714 cri.go:89] found id: ""
	I0819 20:46:46.256025 1071714 logs.go:276] 1 containers: [cd1743a0c29f7c3c40d4bdc5dc99eb01da4c19efe4370cf0c4e31dfc67beef6b]
	I0819 20:46:46.256083 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:46:46.264839 1071714 logs.go:123] Gathering logs for kube-controller-manager [3d07d7c1debd1b26ff207c4f9fc95a70c0aea092bbd5a8c504419cef492ab4dd] ...
	I0819 20:46:46.264869 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d07d7c1debd1b26ff207c4f9fc95a70c0aea092bbd5a8c504419cef492ab4dd"
	I0819 20:46:46.307525 1071714 logs.go:123] Gathering logs for CRI-O ...
	I0819 20:46:46.307553 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 20:46:46.376950 1071714 logs.go:123] Gathering logs for kube-controller-manager [50331fe4aa06ec664c84abe0e67ed361c8bf8e1c7a0d28b31729902d10cab6be] ...
	I0819 20:46:46.376993 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50331fe4aa06ec664c84abe0e67ed361c8bf8e1c7a0d28b31729902d10cab6be"
	I0819 20:46:46.459750 1071714 logs.go:123] Gathering logs for dmesg ...
	I0819 20:46:46.459790 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 20:46:46.476633 1071714 logs.go:123] Gathering logs for describe nodes ...
	I0819 20:46:46.476664 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 20:46:46.728387 1071714 logs.go:123] Gathering logs for etcd [381d982ec2c86f0a282c4cf543d191c3ba5e710d522cb4ba3a70dfadbbd9ae38] ...
	I0819 20:46:46.728425 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 381d982ec2c86f0a282c4cf543d191c3ba5e710d522cb4ba3a70dfadbbd9ae38"
	I0819 20:46:46.791567 1071714 logs.go:123] Gathering logs for kube-scheduler [be36764d53301119df7c5af8f9080d6c5b279a12540211ed85ba85a79fc7a09e] ...
	I0819 20:46:46.791605 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be36764d53301119df7c5af8f9080d6c5b279a12540211ed85ba85a79fc7a09e"
	I0819 20:46:46.835618 1071714 logs.go:123] Gathering logs for kube-proxy [ad8a1e4af013e7825d444b4d808bbe29ce7a4cc7c594fb604b02a974a49c7e25] ...
	I0819 20:46:46.835647 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad8a1e4af013e7825d444b4d808bbe29ce7a4cc7c594fb604b02a974a49c7e25"
	I0819 20:46:46.881293 1071714 logs.go:123] Gathering logs for kindnet [cd1743a0c29f7c3c40d4bdc5dc99eb01da4c19efe4370cf0c4e31dfc67beef6b] ...
	I0819 20:46:46.881321 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd1743a0c29f7c3c40d4bdc5dc99eb01da4c19efe4370cf0c4e31dfc67beef6b"
	I0819 20:46:46.926246 1071714 logs.go:123] Gathering logs for container status ...
	I0819 20:46:46.926279 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 20:46:46.975722 1071714 logs.go:123] Gathering logs for kubelet ...
	I0819 20:46:46.975792 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 20:46:47.042967 1071714 logs.go:123] Gathering logs for kube-apiserver [a7474fb528b0f232d2a4dd5bdecc4d6f265dd804b40b08c5344d8626f48c6270] ...
	I0819 20:46:47.043003 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7474fb528b0f232d2a4dd5bdecc4d6f265dd804b40b08c5344d8626f48c6270"
	I0819 20:46:47.082115 1071714 logs.go:123] Gathering logs for etcd [adec2e728fbaa3f4b0d51339013498e2b8a33698314d4759a9ee3aebaaf9e1e1] ...
	I0819 20:46:47.082146 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 adec2e728fbaa3f4b0d51339013498e2b8a33698314d4759a9ee3aebaaf9e1e1"
	I0819 20:46:47.132559 1071714 logs.go:123] Gathering logs for kube-scheduler [20805e1a15da9a9103e2102cd48852cb6963407c7689354525ff0f969ae6a248] ...
	I0819 20:46:47.132595 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20805e1a15da9a9103e2102cd48852cb6963407c7689354525ff0f969ae6a248"
	I0819 20:46:47.176738 1071714 logs.go:123] Gathering logs for kube-apiserver [d5cc1edbdd55929c146f6fb53ed2f57b6108bc8e82f722998e20ee439a73d17a] ...
	I0819 20:46:47.176767 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5cc1edbdd55929c146f6fb53ed2f57b6108bc8e82f722998e20ee439a73d17a"
	I0819 20:46:49.731789 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0819 20:46:49.731814 1071714 round_trippers.go:469] Request Headers:
	I0819 20:46:49.731824 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:46:49.731829 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:46:49.739341 1071714 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0819 20:46:49.747174 1071714 system_pods.go:59] 19 kube-system pods found
	I0819 20:46:49.747220 1071714 system_pods.go:61] "coredns-6f6b679f8f-d2bzw" [848a74e6-f43a-4d85-957d-d7b2c06865ea] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 20:46:49.747230 1071714 system_pods.go:61] "coredns-6f6b679f8f-m4zj2" [a60dc3be-1f56-4b17-a8b8-de298ab4df88] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 20:46:49.747237 1071714 system_pods.go:61] "etcd-ha-876838" [5d598608-fb44-403d-956a-3fa2df1bc25c] Running
	I0819 20:46:49.747242 1071714 system_pods.go:61] "etcd-ha-876838-m02" [80b1ddb5-cc2d-4d0d-bf16-292ab9992f60] Running
	I0819 20:46:49.747246 1071714 system_pods.go:61] "kindnet-4vxdq" [d2402947-0186-4bc7-a141-8014b9b64055] Running
	I0819 20:46:49.747250 1071714 system_pods.go:61] "kindnet-ffzz7" [429ebc60-eabb-4088-b9ca-d0be6c732feb] Running
	I0819 20:46:49.747254 1071714 system_pods.go:61] "kindnet-tfw52" [2908d557-625c-4034-ae49-add736f511b7] Running
	I0819 20:46:49.747260 1071714 system_pods.go:61] "kube-apiserver-ha-876838" [b65684ff-3671-4d46-931b-a68b4853b33c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 20:46:49.747265 1071714 system_pods.go:61] "kube-apiserver-ha-876838-m02" [4b3cf765-9ab1-4f6b-895d-fa6c3b0c6c95] Running
	I0819 20:46:49.747273 1071714 system_pods.go:61] "kube-controller-manager-ha-876838" [3b4093f4-8e1d-4b35-9d85-d67f10bd5a23] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 20:46:49.747278 1071714 system_pods.go:61] "kube-controller-manager-ha-876838-m02" [daaa1fa4-d0f8-4141-b2c9-97a389f653e5] Running
	I0819 20:46:49.747293 1071714 system_pods.go:61] "kube-proxy-d6lm2" [b53018bf-be3f-4562-bf3a-474bff6b3cca] Running
	I0819 20:46:49.747297 1071714 system_pods.go:61] "kube-proxy-lvqhn" [22e80319-ccb3-466b-b8c2-b42439e0d882] Running
	I0819 20:46:49.747301 1071714 system_pods.go:61] "kube-proxy-n6xdk" [55214fa2-528f-4749-8792-d58998630c21] Running
	I0819 20:46:49.747307 1071714 system_pods.go:61] "kube-scheduler-ha-876838" [f95c9665-ad03-406c-9a31-b5e2e8636924] Running
	I0819 20:46:49.747311 1071714 system_pods.go:61] "kube-scheduler-ha-876838-m02" [94831b74-ba6d-4473-8529-1b8cd841fba1] Running
	I0819 20:46:49.747316 1071714 system_pods.go:61] "kube-vip-ha-876838" [b37a35db-4d37-49cd-b872-de1dbf4b041d] Running
	I0819 20:46:49.747321 1071714 system_pods.go:61] "kube-vip-ha-876838-m02" [f9a24ef1-c187-4c65-bfb2-3fa1d86ca8e1] Running
	I0819 20:46:49.747326 1071714 system_pods.go:61] "storage-provisioner" [3f4389c8-6d78-454e-a280-5ab24fc5a02f] Running
	I0819 20:46:49.747340 1071714 system_pods.go:74] duration metric: took 3.838015955s to wait for pod list to return data ...
	I0819 20:46:49.747348 1071714 default_sa.go:34] waiting for default service account to be created ...
	I0819 20:46:49.747435 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0819 20:46:49.747443 1071714 round_trippers.go:469] Request Headers:
	I0819 20:46:49.747452 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:46:49.747457 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:46:49.750700 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:46:49.750932 1071714 default_sa.go:45] found service account: "default"
	I0819 20:46:49.750954 1071714 default_sa.go:55] duration metric: took 3.598857ms for default service account to be created ...
	I0819 20:46:49.750966 1071714 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 20:46:49.751026 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0819 20:46:49.751031 1071714 round_trippers.go:469] Request Headers:
	I0819 20:46:49.751039 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:46:49.751045 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:46:49.755637 1071714 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 20:46:49.762851 1071714 system_pods.go:86] 19 kube-system pods found
	I0819 20:46:49.762885 1071714 system_pods.go:89] "coredns-6f6b679f8f-d2bzw" [848a74e6-f43a-4d85-957d-d7b2c06865ea] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 20:46:49.762896 1071714 system_pods.go:89] "coredns-6f6b679f8f-m4zj2" [a60dc3be-1f56-4b17-a8b8-de298ab4df88] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 20:46:49.762904 1071714 system_pods.go:89] "etcd-ha-876838" [5d598608-fb44-403d-956a-3fa2df1bc25c] Running
	I0819 20:46:49.762910 1071714 system_pods.go:89] "etcd-ha-876838-m02" [80b1ddb5-cc2d-4d0d-bf16-292ab9992f60] Running
	I0819 20:46:49.762915 1071714 system_pods.go:89] "kindnet-4vxdq" [d2402947-0186-4bc7-a141-8014b9b64055] Running
	I0819 20:46:49.762921 1071714 system_pods.go:89] "kindnet-ffzz7" [429ebc60-eabb-4088-b9ca-d0be6c732feb] Running
	I0819 20:46:49.762926 1071714 system_pods.go:89] "kindnet-tfw52" [2908d557-625c-4034-ae49-add736f511b7] Running
	I0819 20:46:49.762933 1071714 system_pods.go:89] "kube-apiserver-ha-876838" [b65684ff-3671-4d46-931b-a68b4853b33c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 20:46:49.762946 1071714 system_pods.go:89] "kube-apiserver-ha-876838-m02" [4b3cf765-9ab1-4f6b-895d-fa6c3b0c6c95] Running
	I0819 20:46:49.762955 1071714 system_pods.go:89] "kube-controller-manager-ha-876838" [3b4093f4-8e1d-4b35-9d85-d67f10bd5a23] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 20:46:49.762960 1071714 system_pods.go:89] "kube-controller-manager-ha-876838-m02" [daaa1fa4-d0f8-4141-b2c9-97a389f653e5] Running
	I0819 20:46:49.762967 1071714 system_pods.go:89] "kube-proxy-d6lm2" [b53018bf-be3f-4562-bf3a-474bff6b3cca] Running
	I0819 20:46:49.762972 1071714 system_pods.go:89] "kube-proxy-lvqhn" [22e80319-ccb3-466b-b8c2-b42439e0d882] Running
	I0819 20:46:49.762979 1071714 system_pods.go:89] "kube-proxy-n6xdk" [55214fa2-528f-4749-8792-d58998630c21] Running
	I0819 20:46:49.762983 1071714 system_pods.go:89] "kube-scheduler-ha-876838" [f95c9665-ad03-406c-9a31-b5e2e8636924] Running
	I0819 20:46:49.762987 1071714 system_pods.go:89] "kube-scheduler-ha-876838-m02" [94831b74-ba6d-4473-8529-1b8cd841fba1] Running
	I0819 20:46:49.762999 1071714 system_pods.go:89] "kube-vip-ha-876838" [b37a35db-4d37-49cd-b872-de1dbf4b041d] Running
	I0819 20:46:49.763002 1071714 system_pods.go:89] "kube-vip-ha-876838-m02" [f9a24ef1-c187-4c65-bfb2-3fa1d86ca8e1] Running
	I0819 20:46:49.763005 1071714 system_pods.go:89] "storage-provisioner" [3f4389c8-6d78-454e-a280-5ab24fc5a02f] Running
	I0819 20:46:49.763014 1071714 system_pods.go:126] duration metric: took 12.04105ms to wait for k8s-apps to be running ...
	I0819 20:46:49.763027 1071714 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 20:46:49.763089 1071714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 20:46:49.779334 1071714 system_svc.go:56] duration metric: took 16.297001ms WaitForService to wait for kubelet
	I0819 20:46:49.779365 1071714 kubeadm.go:582] duration metric: took 1m13.923825408s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 20:46:49.779388 1071714 node_conditions.go:102] verifying NodePressure condition ...
	I0819 20:46:49.779464 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0819 20:46:49.779475 1071714 round_trippers.go:469] Request Headers:
	I0819 20:46:49.779483 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:46:49.779487 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:46:49.782836 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:46:49.784007 1071714 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0819 20:46:49.784028 1071714 node_conditions.go:123] node cpu capacity is 2
	I0819 20:46:49.784039 1071714 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0819 20:46:49.784044 1071714 node_conditions.go:123] node cpu capacity is 2
	I0819 20:46:49.784048 1071714 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0819 20:46:49.784052 1071714 node_conditions.go:123] node cpu capacity is 2
	I0819 20:46:49.784057 1071714 node_conditions.go:105] duration metric: took 4.663136ms to run NodePressure ...
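
The NodePressure check above lists /api/v1/nodes and reads each node's reported capacity (here 2 CPUs and 203034800Ki of ephemeral storage per node). A small client-go sketch of an equivalent check follows; minikube itself uses its own REST round-tripper as the log shows, so client-go and the kubeconfig path here are assumptions for illustration only.

```go
// Sketch: list cluster nodes and print CPU / ephemeral-storage capacity,
// analogous to the node_conditions check in the log above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption; the in-cluster log uses
	// /var/lib/minikube/kubeconfig on the node.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
```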
	I0819 20:46:49.784069 1071714 start.go:241] waiting for startup goroutines ...
	I0819 20:46:49.784093 1071714 start.go:255] writing updated cluster config ...
	I0819 20:46:49.787178 1071714 out.go:201] 
	I0819 20:46:49.789946 1071714 config.go:182] Loaded profile config "ha-876838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 20:46:49.790124 1071714 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/config.json ...
	I0819 20:46:49.793024 1071714 out.go:177] * Starting "ha-876838-m04" worker node in "ha-876838" cluster
	I0819 20:46:49.796318 1071714 cache.go:121] Beginning downloading kic base image for docker with crio
	I0819 20:46:49.798995 1071714 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0819 20:46:49.801853 1071714 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 20:46:49.801891 1071714 cache.go:56] Caching tarball of preloaded images
	I0819 20:46:49.801935 1071714 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 20:46:49.801999 1071714 preload.go:172] Found /home/jenkins/minikube-integration/19423-1006087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0819 20:46:49.802014 1071714 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 20:46:49.802138 1071714 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/config.json ...
	W0819 20:46:49.821089 1071714 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d is of wrong architecture
	I0819 20:46:49.821113 1071714 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 20:46:49.821180 1071714 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 20:46:49.821203 1071714 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0819 20:46:49.821208 1071714 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0819 20:46:49.821226 1071714 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 20:46:49.821232 1071714 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0819 20:46:49.822525 1071714 image.go:273] response: 
	I0819 20:46:49.944776 1071714 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0819 20:46:49.944821 1071714 cache.go:194] Successfully downloaded all kic artifacts
	I0819 20:46:49.944856 1071714 start.go:360] acquireMachinesLock for ha-876838-m04: {Name:mkba08d68d0db970dc47612cc8e766ab674bc266 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 20:46:49.944934 1071714 start.go:364] duration metric: took 60.037µs to acquireMachinesLock for "ha-876838-m04"
	I0819 20:46:49.944960 1071714 start.go:96] Skipping create...Using existing machine configuration
	I0819 20:46:49.944966 1071714 fix.go:54] fixHost starting: m04
	I0819 20:46:49.945258 1071714 cli_runner.go:164] Run: docker container inspect ha-876838-m04 --format={{.State.Status}}
	I0819 20:46:49.963012 1071714 fix.go:112] recreateIfNeeded on ha-876838-m04: state=Stopped err=<nil>
	W0819 20:46:49.963049 1071714 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 20:46:49.967404 1071714 out.go:177] * Restarting existing docker container for "ha-876838-m04" ...
	I0819 20:46:49.970031 1071714 cli_runner.go:164] Run: docker start ha-876838-m04
	I0819 20:46:50.308901 1071714 cli_runner.go:164] Run: docker container inspect ha-876838-m04 --format={{.State.Status}}
	I0819 20:46:50.331881 1071714 kic.go:430] container "ha-876838-m04" state is running.
	I0819 20:46:50.332258 1071714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-876838-m04
	I0819 20:46:50.360389 1071714 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/config.json ...
	I0819 20:46:50.360643 1071714 machine.go:93] provisionDockerMachine start ...
	I0819 20:46:50.360718 1071714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876838-m04
	I0819 20:46:50.383823 1071714 main.go:141] libmachine: Using SSH client type: native
	I0819 20:46:50.384082 1071714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33968 <nil> <nil>}
	I0819 20:46:50.384098 1071714 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 20:46:50.384781 1071714 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50800->127.0.0.1:33968: read: connection reset by peer
	I0819 20:46:53.533177 1071714 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-876838-m04
	
	I0819 20:46:53.533252 1071714 ubuntu.go:169] provisioning hostname "ha-876838-m04"
	I0819 20:46:53.533347 1071714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876838-m04
	I0819 20:46:53.569378 1071714 main.go:141] libmachine: Using SSH client type: native
	I0819 20:46:53.569687 1071714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33968 <nil> <nil>}
	I0819 20:46:53.569709 1071714 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-876838-m04 && echo "ha-876838-m04" | sudo tee /etc/hostname
	I0819 20:46:53.718727 1071714 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-876838-m04
	
	I0819 20:46:53.718865 1071714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876838-m04
	I0819 20:46:53.743345 1071714 main.go:141] libmachine: Using SSH client type: native
	I0819 20:46:53.743597 1071714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33968 <nil> <nil>}
	I0819 20:46:53.743619 1071714 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-876838-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-876838-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-876838-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 20:46:53.893771 1071714 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 20:46:53.893799 1071714 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19423-1006087/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-1006087/.minikube}
	I0819 20:46:53.893822 1071714 ubuntu.go:177] setting up certificates
	I0819 20:46:53.893843 1071714 provision.go:84] configureAuth start
	I0819 20:46:53.893917 1071714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-876838-m04
	I0819 20:46:53.915677 1071714 provision.go:143] copyHostCerts
	I0819 20:46:53.915724 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-1006087/.minikube/cert.pem
	I0819 20:46:53.915756 1071714 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-1006087/.minikube/cert.pem, removing ...
	I0819 20:46:53.915768 1071714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-1006087/.minikube/cert.pem
	I0819 20:46:53.915851 1071714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-1006087/.minikube/cert.pem (1123 bytes)
	I0819 20:46:53.915938 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-1006087/.minikube/key.pem
	I0819 20:46:53.915963 1071714 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-1006087/.minikube/key.pem, removing ...
	I0819 20:46:53.915970 1071714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-1006087/.minikube/key.pem
	I0819 20:46:53.915999 1071714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-1006087/.minikube/key.pem (1675 bytes)
	I0819 20:46:53.916045 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.pem
	I0819 20:46:53.916066 1071714 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.pem, removing ...
	I0819 20:46:53.916073 1071714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.pem
	I0819 20:46:53.916098 1071714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.pem (1082 bytes)
	I0819 20:46:53.916151 1071714 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca-key.pem org=jenkins.ha-876838-m04 san=[127.0.0.1 192.168.49.5 ha-876838-m04 localhost minikube]
	I0819 20:46:54.572827 1071714 provision.go:177] copyRemoteCerts
	I0819 20:46:54.572898 1071714 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 20:46:54.573025 1071714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876838-m04
	I0819 20:46:54.596780 1071714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33968 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/ha-876838-m04/id_rsa Username:docker}
	I0819 20:46:54.700211 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 20:46:54.700276 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 20:46:54.728130 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 20:46:54.728219 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 20:46:54.756191 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 20:46:54.756252 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 20:46:54.784438 1071714 provision.go:87] duration metric: took 890.576807ms to configureAuth
	I0819 20:46:54.784473 1071714 ubuntu.go:193] setting minikube options for container-runtime
	I0819 20:46:54.784713 1071714 config.go:182] Loaded profile config "ha-876838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 20:46:54.784814 1071714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876838-m04
	I0819 20:46:54.814452 1071714 main.go:141] libmachine: Using SSH client type: native
	I0819 20:46:54.814686 1071714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33968 <nil> <nil>}
	I0819 20:46:54.814700 1071714 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 20:46:55.132606 1071714 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 20:46:55.132638 1071714 machine.go:96] duration metric: took 4.771970115s to provisionDockerMachine
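
provisionDockerMachine above is a series of commands executed over SSH against the node container: set the hostname, patch /etc/hosts, copy certificates, and write the CRI-O drop-in before restarting crio. A rough sketch of running one such command over SSH follows; the port, key path, and command text are copied from the log, while the helper itself is hypothetical and not minikube's ssh_runner/libmachine code.

```go
// Sketch of executing a single provisioning command over SSH, in the
// spirit of the ssh_runner calls above.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runRemote(addr, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local dev node only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Address, key path, and command mirror the log lines above.
	out, err := runRemote("127.0.0.1:33968",
		"/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/ha-876838-m04/id_rsa",
		`sudo hostname ha-876838-m04 && echo "ha-876838-m04" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}
```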
	I0819 20:46:55.132655 1071714 start.go:293] postStartSetup for "ha-876838-m04" (driver="docker")
	I0819 20:46:55.132675 1071714 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 20:46:55.132788 1071714 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 20:46:55.132857 1071714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876838-m04
	I0819 20:46:55.158934 1071714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33968 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/ha-876838-m04/id_rsa Username:docker}
	I0819 20:46:55.263784 1071714 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 20:46:55.268011 1071714 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0819 20:46:55.268049 1071714 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0819 20:46:55.268063 1071714 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0819 20:46:55.268071 1071714 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0819 20:46:55.268082 1071714 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-1006087/.minikube/addons for local assets ...
	I0819 20:46:55.268147 1071714 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-1006087/.minikube/files for local assets ...
	I0819 20:46:55.268248 1071714 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-1006087/.minikube/files/etc/ssl/certs/10114622.pem -> 10114622.pem in /etc/ssl/certs
	I0819 20:46:55.268259 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/files/etc/ssl/certs/10114622.pem -> /etc/ssl/certs/10114622.pem
	I0819 20:46:55.268372 1071714 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 20:46:55.282162 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/files/etc/ssl/certs/10114622.pem --> /etc/ssl/certs/10114622.pem (1708 bytes)
	I0819 20:46:55.312924 1071714 start.go:296] duration metric: took 180.250482ms for postStartSetup
	I0819 20:46:55.313016 1071714 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 20:46:55.313061 1071714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876838-m04
	I0819 20:46:55.331686 1071714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33968 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/ha-876838-m04/id_rsa Username:docker}
	I0819 20:46:55.424159 1071714 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0819 20:46:55.430489 1071714 fix.go:56] duration metric: took 5.485513654s for fixHost
	I0819 20:46:55.430511 1071714 start.go:83] releasing machines lock for "ha-876838-m04", held for 5.48556386s
	I0819 20:46:55.430584 1071714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-876838-m04
	I0819 20:46:55.452960 1071714 out.go:177] * Found network options:
	I0819 20:46:55.455632 1071714 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0819 20:46:55.458439 1071714 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 20:46:55.458484 1071714 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 20:46:55.458511 1071714 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 20:46:55.458522 1071714 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 20:46:55.458596 1071714 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 20:46:55.458646 1071714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876838-m04
	I0819 20:46:55.458922 1071714 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 20:46:55.458990 1071714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876838-m04
	I0819 20:46:55.485745 1071714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33968 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/ha-876838-m04/id_rsa Username:docker}
	I0819 20:46:55.499303 1071714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33968 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/ha-876838-m04/id_rsa Username:docker}
	I0819 20:46:55.780235 1071714 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 20:46:55.785131 1071714 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 20:46:55.796765 1071714 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0819 20:46:55.796901 1071714 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 20:46:55.806884 1071714 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 20:46:55.806910 1071714 start.go:495] detecting cgroup driver to use...
	I0819 20:46:55.806961 1071714 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0819 20:46:55.807026 1071714 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 20:46:55.819687 1071714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 20:46:55.834600 1071714 docker.go:217] disabling cri-docker service (if available) ...
	I0819 20:46:55.834668 1071714 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 20:46:55.852283 1071714 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 20:46:55.867362 1071714 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 20:46:56.013485 1071714 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 20:46:56.141478 1071714 docker.go:233] disabling docker service ...
	I0819 20:46:56.141620 1071714 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 20:46:56.159657 1071714 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 20:46:56.174127 1071714 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 20:46:56.278841 1071714 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 20:46:56.380091 1071714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 20:46:56.395642 1071714 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 20:46:56.437064 1071714 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 20:46:56.437143 1071714 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:46:56.461240 1071714 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 20:46:56.461323 1071714 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:46:56.479054 1071714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:46:56.490288 1071714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:46:56.505214 1071714 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 20:46:56.515522 1071714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:46:56.530642 1071714 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:46:56.544376 1071714 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 20:46:56.557096 1071714 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 20:46:56.566688 1071714 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 20:46:56.575568 1071714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 20:46:56.665847 1071714 ssh_runner.go:195] Run: sudo systemctl restart crio
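	The sed edits a few lines above rewrite /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. A minimal shell sketch (same file path as in the log; the grep itself is not part of the test run) to confirm what the file should now contain:
	    sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup|default_sysctls)' /etc/crio/crio.conf.d/02-crio.conf
	    # expected, given the sed commands above:
	    #   pause_image = "registry.k8s.io/pause:3.10"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   default_sysctls = [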
	I0819 20:46:56.799943 1071714 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 20:46:56.800067 1071714 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 20:46:56.804156 1071714 start.go:563] Will wait 60s for crictl version
	I0819 20:46:56.804259 1071714 ssh_runner.go:195] Run: which crictl
	I0819 20:46:56.807761 1071714 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 20:46:56.853694 1071714 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0819 20:46:56.853875 1071714 ssh_runner.go:195] Run: crio --version
	I0819 20:46:56.915026 1071714 ssh_runner.go:195] Run: crio --version
	I0819 20:46:56.957805 1071714 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0819 20:46:56.960546 1071714 out.go:177]   - env NO_PROXY=192.168.49.2
	I0819 20:46:56.963106 1071714 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0819 20:46:56.965743 1071714 cli_runner.go:164] Run: docker network inspect ha-876838 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 20:46:56.982646 1071714 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0819 20:46:56.987064 1071714 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 20:46:57.004869 1071714 mustload.go:65] Loading cluster: ha-876838
	I0819 20:46:57.005162 1071714 config.go:182] Loaded profile config "ha-876838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 20:46:57.005455 1071714 cli_runner.go:164] Run: docker container inspect ha-876838 --format={{.State.Status}}
	I0819 20:46:57.026681 1071714 host.go:66] Checking if "ha-876838" exists ...
	I0819 20:46:57.026993 1071714 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838 for IP: 192.168.49.5
	I0819 20:46:57.027009 1071714 certs.go:194] generating shared ca certs ...
	I0819 20:46:57.027027 1071714 certs.go:226] acquiring lock for ca certs: {Name:mka0619a4a0da3f790025b70d844d99358d748e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 20:46:57.027136 1071714 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.key
	I0819 20:46:57.027188 1071714 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/proxy-client-ca.key
	I0819 20:46:57.027204 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 20:46:57.027220 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 20:46:57.027237 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 20:46:57.027249 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 20:46:57.027319 1071714 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/1011462.pem (1338 bytes)
	W0819 20:46:57.027354 1071714 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/1011462_empty.pem, impossibly tiny 0 bytes
	I0819 20:46:57.027367 1071714 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 20:46:57.027396 1071714 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/ca.pem (1082 bytes)
	I0819 20:46:57.027423 1071714 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/cert.pem (1123 bytes)
	I0819 20:46:57.027451 1071714 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/key.pem (1675 bytes)
	I0819 20:46:57.027501 1071714 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1006087/.minikube/files/etc/ssl/certs/10114622.pem (1708 bytes)
	I0819 20:46:57.027537 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/files/etc/ssl/certs/10114622.pem -> /usr/share/ca-certificates/10114622.pem
	I0819 20:46:57.027557 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:46:57.027578 1071714 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/1011462.pem -> /usr/share/ca-certificates/1011462.pem
	I0819 20:46:57.027606 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 20:46:57.055051 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 20:46:57.087720 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 20:46:57.120728 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 20:46:57.147302 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/files/etc/ssl/certs/10114622.pem --> /usr/share/ca-certificates/10114622.pem (1708 bytes)
	I0819 20:46:57.174586 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 20:46:57.200334 1071714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1006087/.minikube/certs/1011462.pem --> /usr/share/ca-certificates/1011462.pem (1338 bytes)
	I0819 20:46:57.231544 1071714 ssh_runner.go:195] Run: openssl version
	I0819 20:46:57.237107 1071714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10114622.pem && ln -fs /usr/share/ca-certificates/10114622.pem /etc/ssl/certs/10114622.pem"
	I0819 20:46:57.247434 1071714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10114622.pem
	I0819 20:46:57.251377 1071714 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 20:32 /usr/share/ca-certificates/10114622.pem
	I0819 20:46:57.251448 1071714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10114622.pem
	I0819 20:46:57.258602 1071714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10114622.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 20:46:57.268365 1071714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 20:46:57.280910 1071714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:46:57.285082 1071714 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 20:21 /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:46:57.285167 1071714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 20:46:57.292628 1071714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 20:46:57.302771 1071714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1011462.pem && ln -fs /usr/share/ca-certificates/1011462.pem /etc/ssl/certs/1011462.pem"
	I0819 20:46:57.312593 1071714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1011462.pem
	I0819 20:46:57.316265 1071714 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 20:32 /usr/share/ca-certificates/1011462.pem
	I0819 20:46:57.316352 1071714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1011462.pem
	I0819 20:46:57.323392 1071714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1011462.pem /etc/ssl/certs/51391683.0"
	I0819 20:46:57.333339 1071714 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 20:46:57.336732 1071714 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 20:46:57.336776 1071714 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.31.0  false true} ...
	I0819 20:46:57.336861 1071714 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-876838-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-876838 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 20:46:57.336930 1071714 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 20:46:57.346761 1071714 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 20:46:57.346881 1071714 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0819 20:46:57.356795 1071714 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0819 20:46:57.376022 1071714 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
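	The two scp calls above install the 10-kubeadm.conf drop-in and the kubelet.service unit whose contents were printed earlier. A quick sketch, assuming systemd manages kubelet on the node, for checking what the daemon-reload further down will pick up:
	    systemctl cat kubelet | grep -E 'ExecStart=|--node-ip|--hostname-override'
	    # expected to echo the flags from the generated unit, e.g.
	    #   --hostname-override=ha-876838-m04 --node-ip=192.168.49.5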
	I0819 20:46:57.398586 1071714 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0819 20:46:57.402074 1071714 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 20:46:57.414148 1071714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 20:46:57.504889 1071714 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 20:46:57.518023 1071714 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0819 20:46:57.518518 1071714 config.go:182] Loaded profile config "ha-876838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 20:46:57.521555 1071714 out.go:177] * Verifying Kubernetes components...
	I0819 20:46:57.524214 1071714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 20:46:57.639856 1071714 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 20:46:57.655813 1071714 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19423-1006087/kubeconfig
	I0819 20:46:57.656095 1071714 kapi.go:59] client config for ha-876838: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/client.crt", KeyFile:"/home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/ha-876838/client.key", CAFile:"/home/jenkins/minikube-integration/19423-1006087/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19cb7b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 20:46:57.656170 1071714 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0819 20:46:57.656419 1071714 node_ready.go:35] waiting up to 6m0s for node "ha-876838-m04" to be "Ready" ...
	I0819 20:46:57.656498 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m04
	I0819 20:46:57.656509 1071714 round_trippers.go:469] Request Headers:
	I0819 20:46:57.656518 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:46:57.656524 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:46:57.659967 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:46:58.157406 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m04
	I0819 20:46:58.157431 1071714 round_trippers.go:469] Request Headers:
	I0819 20:46:58.157440 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:46:58.157445 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:46:58.160325 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:46:58.656693 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m04
	I0819 20:46:58.656719 1071714 round_trippers.go:469] Request Headers:
	I0819 20:46:58.656728 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:46:58.656731 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:46:58.659693 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:46:59.157323 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m04
	I0819 20:46:59.157403 1071714 round_trippers.go:469] Request Headers:
	I0819 20:46:59.157420 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:46:59.157425 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:46:59.160406 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:46:59.657586 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m04
	I0819 20:46:59.657643 1071714 round_trippers.go:469] Request Headers:
	I0819 20:46:59.657653 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:46:59.657658 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:46:59.660567 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:46:59.661377 1071714 node_ready.go:53] node "ha-876838-m04" has status "Ready":"Unknown"
	I0819 20:47:00.157549 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m04
	I0819 20:47:00.157578 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:00.157587 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:00.157640 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:00.162661 1071714 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 20:47:00.657181 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m04
	I0819 20:47:00.657221 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:00.657231 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:00.657235 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:00.660211 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:01.156989 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m04
	I0819 20:47:01.157011 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:01.157019 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:01.157025 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:01.159972 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:01.657080 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m04
	I0819 20:47:01.657104 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:01.657113 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:01.657119 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:01.659955 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:02.157414 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m04
	I0819 20:47:02.157438 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:02.157448 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:02.157452 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:02.160336 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:02.160988 1071714 node_ready.go:53] node "ha-876838-m04" has status "Ready":"Unknown"
	I0819 20:47:02.656636 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m04
	I0819 20:47:02.656660 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:02.656670 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:02.656675 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:02.659757 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:03.157314 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m04
	I0819 20:47:03.157340 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:03.157351 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:03.157358 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:03.160791 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:03.657434 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m04
	I0819 20:47:03.657456 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:03.657466 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:03.657471 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:03.660282 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:04.156682 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m04
	I0819 20:47:04.156704 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:04.156713 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:04.156719 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:04.159709 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:04.160348 1071714 node_ready.go:49] node "ha-876838-m04" has status "Ready":"True"
	I0819 20:47:04.160372 1071714 node_ready.go:38] duration metric: took 6.503935832s for node "ha-876838-m04" to be "Ready" ...
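	The GET loop above is minikube's own readiness poll through client-go (the round_trippers lines). Outside the test, the same wait can be expressed with kubectl; a sketch assuming the kubeconfig context name matches the profile ha-876838:
	    kubectl --context ha-876838 wait node/ha-876838-m04 --for=condition=Ready --timeout=6m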
	I0819 20:47:04.160383 1071714 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 20:47:04.160455 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0819 20:47:04.160466 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:04.160475 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:04.160479 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:04.165709 1071714 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 20:47:04.173234 1071714 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-d2bzw" in "kube-system" namespace to be "Ready" ...
	I0819 20:47:04.173357 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:04.173378 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:04.173388 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:04.173393 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:04.176442 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:04.177285 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:04.177304 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:04.177314 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:04.177318 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:04.180448 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:04.673932 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:04.673956 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:04.673966 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:04.673971 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:04.678084 1071714 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 20:47:04.679288 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:04.679311 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:04.679321 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:04.679326 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:04.683950 1071714 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 20:47:05.174055 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:05.174080 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:05.174090 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:05.174095 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:05.177055 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:05.178059 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:05.178081 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:05.178090 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:05.178095 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:05.180757 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:05.674365 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:05.674392 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:05.674401 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:05.674405 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:05.677414 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:05.678158 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:05.678177 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:05.678186 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:05.678191 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:05.681313 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:06.173666 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:06.173690 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:06.173699 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:06.173703 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:06.176860 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:06.177798 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:06.177821 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:06.177831 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:06.177838 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:06.180539 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:06.181337 1071714 pod_ready.go:103] pod "coredns-6f6b679f8f-d2bzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:47:06.674156 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:06.674178 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:06.674188 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:06.674193 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:06.677436 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:06.678498 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:06.678524 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:06.678542 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:06.678586 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:06.681547 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:07.174422 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:07.174446 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:07.174456 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:07.174460 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:07.178434 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:07.179371 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:07.179391 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:07.179400 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:07.179406 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:07.181849 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:07.674112 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:07.674136 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:07.674145 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:07.674149 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:07.677002 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:07.677941 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:07.677962 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:07.677971 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:07.677976 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:07.680518 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:08.174346 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:08.174371 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:08.174379 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:08.174384 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:08.177402 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:08.178198 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:08.178219 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:08.178237 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:08.178244 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:08.181005 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:08.181877 1071714 pod_ready.go:103] pod "coredns-6f6b679f8f-d2bzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:47:08.674134 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:08.674159 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:08.674170 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:08.674178 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:08.676758 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:08.677405 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:08.677415 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:08.677423 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:08.677428 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:08.680508 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:09.173396 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:09.173419 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:09.173429 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:09.173434 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:09.176606 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:09.177480 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:09.177501 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:09.177511 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:09.177517 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:09.180250 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:09.673537 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:09.673559 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:09.673568 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:09.673573 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:09.676923 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:09.677834 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:09.677855 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:09.677864 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:09.677868 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:09.681231 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:10.173994 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:10.174018 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:10.174027 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:10.174032 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:10.176859 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:10.177588 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:10.177644 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:10.177654 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:10.177659 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:10.180736 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:10.674268 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:10.674291 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:10.674301 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:10.674305 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:10.677201 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:10.678184 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:10.678207 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:10.678217 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:10.678230 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:10.680749 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:10.681331 1071714 pod_ready.go:103] pod "coredns-6f6b679f8f-d2bzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:47:11.174350 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:11.174372 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:11.174383 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:11.174388 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:11.177438 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:11.178319 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:11.178340 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:11.178350 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:11.178356 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:11.180788 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:11.674142 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:11.674167 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:11.674178 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:11.674184 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:11.676948 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:11.677948 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:11.677970 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:11.677980 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:11.677984 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:11.680658 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:12.173784 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:12.173813 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:12.173824 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:12.173828 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:12.176991 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:12.177959 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:12.177982 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:12.177992 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:12.177997 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:12.181147 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:12.673491 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:12.673520 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:12.673530 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:12.673534 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:12.676594 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:12.677396 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:12.677412 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:12.677421 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:12.677425 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:12.681018 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:12.682069 1071714 pod_ready.go:103] pod "coredns-6f6b679f8f-d2bzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:47:13.174287 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:13.174308 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:13.174318 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:13.174322 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:13.177332 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:13.178026 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:13.178036 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:13.178045 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:13.178049 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:13.180574 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:13.673492 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:13.673517 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:13.673527 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:13.673532 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:13.676435 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:13.677364 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:13.677383 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:13.677393 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:13.677397 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:13.680234 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:14.175177 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:14.175200 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:14.175210 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:14.175220 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:14.177980 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:14.178810 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:14.178830 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:14.178840 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:14.178844 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:14.181519 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:14.674019 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:14.674045 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:14.674055 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:14.674061 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:14.676907 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:14.678047 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:14.678064 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:14.678075 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:14.678081 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:14.682380 1071714 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 20:47:14.683052 1071714 pod_ready.go:103] pod "coredns-6f6b679f8f-d2bzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:47:15.173534 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:15.173570 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:15.173583 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:15.173608 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:15.178212 1071714 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 20:47:15.180585 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:15.180607 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:15.180617 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:15.180621 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:15.184012 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:15.673555 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:15.673580 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:15.673611 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:15.673617 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:15.676629 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:15.677428 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:15.677446 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:15.677456 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:15.677463 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:15.680372 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:16.173488 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:16.173516 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:16.173526 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:16.173530 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:16.176447 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:16.177256 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:16.177309 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:16.177322 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:16.177329 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:16.179828 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:16.674060 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:16.674086 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:16.674093 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:16.674097 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:16.677015 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:16.677887 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:16.677906 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:16.677916 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:16.677921 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:16.680322 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:17.173775 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:17.173795 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:17.173805 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:17.173809 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:17.176717 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:17.177761 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:17.177781 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:17.177791 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:17.177796 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:17.180528 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:17.181231 1071714 pod_ready.go:103] pod "coredns-6f6b679f8f-d2bzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:47:17.674041 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:17.674067 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:17.674078 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:17.674085 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:17.677051 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:17.678009 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:17.678036 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:17.678045 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:17.678053 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:17.681070 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:18.173582 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:18.173626 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:18.173635 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:18.173639 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:18.176696 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:18.177527 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:18.177550 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:18.177560 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:18.177564 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:18.180670 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:18.674083 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:18.674109 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:18.674119 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:18.674123 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:18.676961 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:18.677829 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:18.677842 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:18.677851 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:18.677855 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:18.681309 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:19.174255 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:19.174279 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:19.174289 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:19.174294 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:19.177201 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:19.178066 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:19.178089 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:19.178099 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:19.178106 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:19.180800 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:19.181578 1071714 pod_ready.go:103] pod "coredns-6f6b679f8f-d2bzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:47:19.673532 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:19.673557 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:19.673567 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:19.673572 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:19.676757 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:19.677522 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:19.677546 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:19.677556 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:19.677561 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:19.680380 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:20.173456 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:20.173482 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:20.173492 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:20.173498 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:20.176653 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:20.177408 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:20.177428 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:20.177438 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:20.177442 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:20.180915 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:20.673501 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:20.673533 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:20.673543 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:20.673546 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:20.677079 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:20.677961 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:20.677981 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:20.677991 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:20.677996 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:20.681332 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:21.173786 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:21.173809 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:21.173819 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:21.173824 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:21.176729 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:21.177476 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:21.177498 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:21.177509 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:21.177513 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:21.180218 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:21.674469 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:21.674489 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:21.674498 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:21.674503 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:21.677230 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:21.677940 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:21.677954 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:21.677963 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:21.677969 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:21.681067 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:21.681587 1071714 pod_ready.go:103] pod "coredns-6f6b679f8f-d2bzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:47:22.174048 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:22.174074 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:22.174083 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:22.174087 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:22.177261 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:22.178131 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:22.178174 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:22.178186 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:22.178196 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:22.181646 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:22.674006 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:22.674031 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:22.674041 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:22.674047 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:22.677108 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:22.678052 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:22.678075 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:22.678085 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:22.678091 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:22.681386 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:23.174027 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:23.174053 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:23.174064 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:23.174068 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:23.177410 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:23.178426 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:23.178450 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:23.178464 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:23.178470 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:23.181462 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:23.673483 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:23.673509 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:23.673529 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:23.673532 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:23.676473 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:23.677544 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:23.677568 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:23.677578 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:23.677584 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:23.681052 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:23.681766 1071714 pod_ready.go:103] pod "coredns-6f6b679f8f-d2bzw" in "kube-system" namespace has status "Ready":"False"
	I0819 20:47:24.173564 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-d2bzw
	I0819 20:47:24.173620 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:24.173631 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:24.173635 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:24.193871 1071714 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0819 20:47:24.195109 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:24.195178 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:24.195202 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:24.195224 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:24.201327 1071714 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 20:47:24.204214 1071714 pod_ready.go:98] node "ha-876838" hosting pod "coredns-6f6b679f8f-d2bzw" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-876838" has status "Ready":"Unknown"
	I0819 20:47:24.204240 1071714 pod_ready.go:82] duration metric: took 20.030975271s for pod "coredns-6f6b679f8f-d2bzw" in "kube-system" namespace to be "Ready" ...
	E0819 20:47:24.204250 1071714 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-876838" hosting pod "coredns-6f6b679f8f-d2bzw" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-876838" has status "Ready":"Unknown"
	I0819 20:47:24.204257 1071714 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-m4zj2" in "kube-system" namespace to be "Ready" ...
	I0819 20:47:24.204323 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m4zj2
	I0819 20:47:24.204328 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:24.204336 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:24.204340 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:24.207559 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:24.208823 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:24.208893 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:24.208916 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:24.208938 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:24.215151 1071714 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0819 20:47:24.215817 1071714 pod_ready.go:98] node "ha-876838" hosting pod "coredns-6f6b679f8f-m4zj2" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-876838" has status "Ready":"Unknown"
	I0819 20:47:24.215837 1071714 pod_ready.go:82] duration metric: took 11.573698ms for pod "coredns-6f6b679f8f-m4zj2" in "kube-system" namespace to be "Ready" ...
	E0819 20:47:24.215848 1071714 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-876838" hosting pod "coredns-6f6b679f8f-m4zj2" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-876838" has status "Ready":"Unknown"
	I0819 20:47:24.215855 1071714 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-876838" in "kube-system" namespace to be "Ready" ...
	I0819 20:47:24.215920 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-876838
	I0819 20:47:24.215924 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:24.215932 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:24.215937 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:24.219783 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:24.220458 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:24.220507 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:24.220530 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:24.220554 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:24.223848 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:24.224517 1071714 pod_ready.go:98] node "ha-876838" hosting pod "etcd-ha-876838" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-876838" has status "Ready":"Unknown"
	I0819 20:47:24.224573 1071714 pod_ready.go:82] duration metric: took 8.710441ms for pod "etcd-ha-876838" in "kube-system" namespace to be "Ready" ...
	E0819 20:47:24.224598 1071714 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-876838" hosting pod "etcd-ha-876838" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-876838" has status "Ready":"Unknown"
	I0819 20:47:24.224619 1071714 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-876838-m02" in "kube-system" namespace to be "Ready" ...
	I0819 20:47:24.224734 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-876838-m02
	I0819 20:47:24.224759 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:24.224780 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:24.224803 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:24.232524 1071714 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0819 20:47:24.233368 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m02
	I0819 20:47:24.233445 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:24.233476 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:24.233497 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:24.243656 1071714 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0819 20:47:24.245385 1071714 pod_ready.go:93] pod "etcd-ha-876838-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 20:47:24.245458 1071714 pod_ready.go:82] duration metric: took 20.799654ms for pod "etcd-ha-876838-m02" in "kube-system" namespace to be "Ready" ...
	I0819 20:47:24.245496 1071714 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-876838" in "kube-system" namespace to be "Ready" ...
	I0819 20:47:24.245620 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-876838
	I0819 20:47:24.245650 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:24.245672 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:24.245693 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:24.257948 1071714 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0819 20:47:24.258819 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:24.258878 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:24.258901 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:24.258922 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:24.267894 1071714 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0819 20:47:24.269638 1071714 pod_ready.go:98] node "ha-876838" hosting pod "kube-apiserver-ha-876838" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-876838" has status "Ready":"Unknown"
	I0819 20:47:24.269712 1071714 pod_ready.go:82] duration metric: took 24.181914ms for pod "kube-apiserver-ha-876838" in "kube-system" namespace to be "Ready" ...
	E0819 20:47:24.269737 1071714 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-876838" hosting pod "kube-apiserver-ha-876838" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-876838" has status "Ready":"Unknown"
	I0819 20:47:24.269770 1071714 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-876838-m02" in "kube-system" namespace to be "Ready" ...
	I0819 20:47:24.374064 1071714 request.go:632] Waited for 104.204474ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-876838-m02
	I0819 20:47:24.374224 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-876838-m02
	I0819 20:47:24.374239 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:24.374247 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:24.374277 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:24.377448 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:24.573855 1071714 request.go:632] Waited for 195.313867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-876838-m02
	I0819 20:47:24.573943 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m02
	I0819 20:47:24.573953 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:24.573975 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:24.573985 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:24.576876 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:24.578174 1071714 pod_ready.go:93] pod "kube-apiserver-ha-876838-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 20:47:24.578200 1071714 pod_ready.go:82] duration metric: took 308.405076ms for pod "kube-apiserver-ha-876838-m02" in "kube-system" namespace to be "Ready" ...
	I0819 20:47:24.578220 1071714 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-876838" in "kube-system" namespace to be "Ready" ...
	I0819 20:47:24.774565 1071714 request.go:632] Waited for 196.277462ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-876838
	I0819 20:47:24.774688 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-876838
	I0819 20:47:24.774727 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:24.774755 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:24.774777 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:24.778594 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:24.973735 1071714 request.go:632] Waited for 193.25995ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:24.973803 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:24.973813 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:24.973822 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:24.973826 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:24.976958 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:24.977684 1071714 pod_ready.go:98] node "ha-876838" hosting pod "kube-controller-manager-ha-876838" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-876838" has status "Ready":"Unknown"
	I0819 20:47:24.977707 1071714 pod_ready.go:82] duration metric: took 399.478137ms for pod "kube-controller-manager-ha-876838" in "kube-system" namespace to be "Ready" ...
	E0819 20:47:24.977730 1071714 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-876838" hosting pod "kube-controller-manager-ha-876838" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-876838" has status "Ready":"Unknown"
	I0819 20:47:24.977744 1071714 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-876838-m02" in "kube-system" namespace to be "Ready" ...
	I0819 20:47:25.174515 1071714 request.go:632] Waited for 196.688831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-876838-m02
	I0819 20:47:25.174598 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-876838-m02
	I0819 20:47:25.174611 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:25.174621 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:25.174632 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:25.177719 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:25.373745 1071714 request.go:632] Waited for 194.987289ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-876838-m02
	I0819 20:47:25.373842 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m02
	I0819 20:47:25.373866 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:25.373876 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:25.373881 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:25.376946 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:25.377729 1071714 pod_ready.go:93] pod "kube-controller-manager-ha-876838-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 20:47:25.377756 1071714 pod_ready.go:82] duration metric: took 400.003147ms for pod "kube-controller-manager-ha-876838-m02" in "kube-system" namespace to be "Ready" ...
	I0819 20:47:25.377769 1071714 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-d6lm2" in "kube-system" namespace to be "Ready" ...
	I0819 20:47:25.574333 1071714 request.go:632] Waited for 196.448644ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d6lm2
	I0819 20:47:25.574422 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d6lm2
	I0819 20:47:25.574435 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:25.574444 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:25.574449 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:25.577364 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:25.774493 1071714 request.go:632] Waited for 196.385982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-876838-m02
	I0819 20:47:25.774591 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m02
	I0819 20:47:25.774606 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:25.774620 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:25.774635 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:25.777472 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:25.778083 1071714 pod_ready.go:93] pod "kube-proxy-d6lm2" in "kube-system" namespace has status "Ready":"True"
	I0819 20:47:25.778103 1071714 pod_ready.go:82] duration metric: took 400.326878ms for pod "kube-proxy-d6lm2" in "kube-system" namespace to be "Ready" ...
	I0819 20:47:25.778129 1071714 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lvqhn" in "kube-system" namespace to be "Ready" ...
	I0819 20:47:25.973623 1071714 request.go:632] Waited for 195.393588ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lvqhn
	I0819 20:47:25.973734 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lvqhn
	I0819 20:47:25.973781 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:25.973805 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:25.973826 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:25.977040 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:26.173918 1071714 request.go:632] Waited for 196.246299ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-876838-m04
	I0819 20:47:26.173991 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m04
	I0819 20:47:26.174001 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:26.174010 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:26.174013 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:26.176741 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:26.177445 1071714 pod_ready.go:93] pod "kube-proxy-lvqhn" in "kube-system" namespace has status "Ready":"True"
	I0819 20:47:26.177469 1071714 pod_ready.go:82] duration metric: took 399.328165ms for pod "kube-proxy-lvqhn" in "kube-system" namespace to be "Ready" ...
	I0819 20:47:26.177481 1071714 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n6xdk" in "kube-system" namespace to be "Ready" ...
	I0819 20:47:26.373839 1071714 request.go:632] Waited for 196.281942ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n6xdk
	I0819 20:47:26.373923 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n6xdk
	I0819 20:47:26.373930 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:26.373938 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:26.373942 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:26.376747 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:26.573686 1071714 request.go:632] Waited for 196.281253ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:26.573748 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:26.573759 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:26.573771 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:26.573779 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:26.576860 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:26.577732 1071714 pod_ready.go:98] node "ha-876838" hosting pod "kube-proxy-n6xdk" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-876838" has status "Ready":"Unknown"
	I0819 20:47:26.577766 1071714 pod_ready.go:82] duration metric: took 400.272552ms for pod "kube-proxy-n6xdk" in "kube-system" namespace to be "Ready" ...
	E0819 20:47:26.577799 1071714 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-876838" hosting pod "kube-proxy-n6xdk" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-876838" has status "Ready":"Unknown"
	I0819 20:47:26.577808 1071714 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-876838" in "kube-system" namespace to be "Ready" ...
	I0819 20:47:26.774059 1071714 request.go:632] Waited for 196.174053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-876838
	I0819 20:47:26.774122 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-876838
	I0819 20:47:26.774133 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:26.774142 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:26.774148 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:26.788142 1071714 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0819 20:47:26.974350 1071714 request.go:632] Waited for 185.123364ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:26.974420 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838
	I0819 20:47:26.974430 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:26.974439 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:26.974443 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:26.982226 1071714 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0819 20:47:26.982913 1071714 pod_ready.go:98] node "ha-876838" hosting pod "kube-scheduler-ha-876838" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-876838" has status "Ready":"Unknown"
	I0819 20:47:26.982935 1071714 pod_ready.go:82] duration metric: took 405.119137ms for pod "kube-scheduler-ha-876838" in "kube-system" namespace to be "Ready" ...
	E0819 20:47:26.982945 1071714 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-876838" hosting pod "kube-scheduler-ha-876838" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-876838" has status "Ready":"Unknown"
	I0819 20:47:26.982951 1071714 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-876838-m02" in "kube-system" namespace to be "Ready" ...
	I0819 20:47:27.174353 1071714 request.go:632] Waited for 191.306037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-876838-m02
	I0819 20:47:27.174417 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-876838-m02
	I0819 20:47:27.174424 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:27.174432 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:27.174436 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:27.177452 1071714 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 20:47:27.374640 1071714 request.go:632] Waited for 196.364838ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-876838-m02
	I0819 20:47:27.374701 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-876838-m02
	I0819 20:47:27.374728 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:27.374737 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:27.374742 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:27.377768 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:27.378462 1071714 pod_ready.go:93] pod "kube-scheduler-ha-876838-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 20:47:27.378485 1071714 pod_ready.go:82] duration metric: took 395.518126ms for pod "kube-scheduler-ha-876838-m02" in "kube-system" namespace to be "Ready" ...
	I0819 20:47:27.378498 1071714 pod_ready.go:39] duration metric: took 23.218103927s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 20:47:27.378515 1071714 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 20:47:27.378578 1071714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 20:47:27.391856 1071714 system_svc.go:56] duration metric: took 13.332272ms WaitForService to wait for kubelet
	I0819 20:47:27.391889 1071714 kubeadm.go:582] duration metric: took 29.873821721s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 20:47:27.391922 1071714 node_conditions.go:102] verifying NodePressure condition ...
	I0819 20:47:27.574313 1071714 request.go:632] Waited for 182.313202ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0819 20:47:27.574396 1071714 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0819 20:47:27.574402 1071714 round_trippers.go:469] Request Headers:
	I0819 20:47:27.574411 1071714 round_trippers.go:473]     Accept: application/json, */*
	I0819 20:47:27.574417 1071714 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0819 20:47:27.577933 1071714 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 20:47:27.579704 1071714 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0819 20:47:27.579733 1071714 node_conditions.go:123] node cpu capacity is 2
	I0819 20:47:27.579744 1071714 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0819 20:47:27.579751 1071714 node_conditions.go:123] node cpu capacity is 2
	I0819 20:47:27.579755 1071714 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0819 20:47:27.579760 1071714 node_conditions.go:123] node cpu capacity is 2
	I0819 20:47:27.579764 1071714 node_conditions.go:105] duration metric: took 187.836606ms to run NodePressure ...
	I0819 20:47:27.579796 1071714 start.go:241] waiting for startup goroutines ...
	I0819 20:47:27.579825 1071714 start.go:255] writing updated cluster config ...
	I0819 20:47:27.580194 1071714 ssh_runner.go:195] Run: rm -f paused
	I0819 20:47:27.650739 1071714 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 20:47:27.655663 1071714 out.go:177] * Done! kubectl is now configured to use "ha-876838" cluster and "default" namespace by default
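The long run of paired GET requests above (the pod, then the node hosting it, roughly every 500 ms) is the pod-readiness wait loop: each system pod is fetched, its node is fetched, and the pod is skipped once the node reports Ready "Unknown", which is what the pod_ready.go "skipping!" lines record. The "Waited for ... due to client-side throttling" entries come from client-go's client-side rate limiter (the QPS/Burst settings on rest.Config), not from API priority and fairness. A minimal sketch of such a loop, assuming a client-go clientset; the package and helper name are illustrative and not minikube's actual pod_ready.go implementation:

    package readiness

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // WaitPodReady polls a pod and the node hosting it until the pod reports
    // Ready, the hosting node stops being Ready, or the timeout expires.
    func WaitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		// Give up on pods whose node is not Ready (the "skipping!" case in the log above).
    		for _, c := range node.Status.Conditions {
    			if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
    				return fmt.Errorf("node %q hosting pod %q has status Ready=%q", node.Name, name, c.Status)
    			}
    		}
    		for _, c := range pod.Status.Conditions {
    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence of the GETs above
    	}
    	return fmt.Errorf("pod %q in namespace %q did not become Ready within %s", name, ns, timeout)
    }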
	
	
	==> CRI-O <==
	Aug 19 20:46:42 ha-876838 crio[645]: time="2024-08-19 20:46:42.863087556Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f41a4a1c628fa9809e8b3f6444a4d85a74bb2569bda7a54d7af017ab4d7f4d6a/merged/etc/group: no such file or directory"
	Aug 19 20:46:42 ha-876838 crio[645]: time="2024-08-19 20:46:42.922043854Z" level=info msg="Created container b82f09a3b842e0aa424ec457d4b594d6da95e317e425bf1a9cb71782c5bd88cc: kube-system/storage-provisioner/storage-provisioner" id=db30abe0-f709-45d8-a3c3-bfb8367ba1b3 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 19 20:46:42 ha-876838 crio[645]: time="2024-08-19 20:46:42.922588187Z" level=info msg="Starting container: b82f09a3b842e0aa424ec457d4b594d6da95e317e425bf1a9cb71782c5bd88cc" id=380d8a2f-b6bc-4bc2-8005-b9e97a311d0d name=/runtime.v1.RuntimeService/StartContainer
	Aug 19 20:46:42 ha-876838 crio[645]: time="2024-08-19 20:46:42.941163128Z" level=info msg="Started container" PID=1847 containerID=b82f09a3b842e0aa424ec457d4b594d6da95e317e425bf1a9cb71782c5bd88cc description=kube-system/storage-provisioner/storage-provisioner id=380d8a2f-b6bc-4bc2-8005-b9e97a311d0d name=/runtime.v1.RuntimeService/StartContainer sandboxID=c173f339aa55b789de89f40e9540854f0039887c10ad10539038d8f6137ad64c
	Aug 19 20:46:52 ha-876838 crio[645]: time="2024-08-19 20:46:52.587177532Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.0" id=738c3e01-a8b9-466d-9fca-2134caf3085b name=/runtime.v1.ImageService/ImageStatus
	Aug 19 20:46:52 ha-876838 crio[645]: time="2024-08-19 20:46:52.587383397Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.0],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:ed8613b19e25d56d25e9ba0d83fd1e14f8ba070cb80e2674ba62ded55e260a9c registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d],Size_:86930758,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=738c3e01-a8b9-466d-9fca-2134caf3085b name=/runtime.v1.ImageService/ImageStatus
	Aug 19 20:46:52 ha-876838 crio[645]: time="2024-08-19 20:46:52.588345490Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.0" id=913375f5-8dba-4dc8-8e4e-971c6d2e69db name=/runtime.v1.ImageService/ImageStatus
	Aug 19 20:46:52 ha-876838 crio[645]: time="2024-08-19 20:46:52.588548508Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.0],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:ed8613b19e25d56d25e9ba0d83fd1e14f8ba070cb80e2674ba62ded55e260a9c registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d],Size_:86930758,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=913375f5-8dba-4dc8-8e4e-971c6d2e69db name=/runtime.v1.ImageService/ImageStatus
	Aug 19 20:46:52 ha-876838 crio[645]: time="2024-08-19 20:46:52.590293980Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-876838/kube-controller-manager" id=1a7c6403-aae2-44da-9cce-b70a511c6139 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 19 20:46:52 ha-876838 crio[645]: time="2024-08-19 20:46:52.590638060Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 19 20:46:52 ha-876838 crio[645]: time="2024-08-19 20:46:52.671228140Z" level=info msg="Created container 998e3ca584f48c6dd18ab4afaf44e842021a349c3904d8bb80cdf1e13148171c: kube-system/kube-controller-manager-ha-876838/kube-controller-manager" id=1a7c6403-aae2-44da-9cce-b70a511c6139 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 19 20:46:52 ha-876838 crio[645]: time="2024-08-19 20:46:52.671899642Z" level=info msg="Starting container: 998e3ca584f48c6dd18ab4afaf44e842021a349c3904d8bb80cdf1e13148171c" id=cca1b5da-30bb-4056-91b0-4ecb269ef941 name=/runtime.v1.RuntimeService/StartContainer
	Aug 19 20:46:52 ha-876838 crio[645]: time="2024-08-19 20:46:52.682154154Z" level=info msg="Started container" PID=1887 containerID=998e3ca584f48c6dd18ab4afaf44e842021a349c3904d8bb80cdf1e13148171c description=kube-system/kube-controller-manager-ha-876838/kube-controller-manager id=cca1b5da-30bb-4056-91b0-4ecb269ef941 name=/runtime.v1.RuntimeService/StartContainer sandboxID=66f143cc3f4126c3d733ecb9bd3017189e3a8f728da6e26a4a5fa33ac40b71b5
	Aug 19 20:46:52 ha-876838 crio[645]: time="2024-08-19 20:46:52.880490170Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Aug 19 20:46:52 ha-876838 crio[645]: time="2024-08-19 20:46:52.888657189Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 19 20:46:52 ha-876838 crio[645]: time="2024-08-19 20:46:52.888703999Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 19 20:46:52 ha-876838 crio[645]: time="2024-08-19 20:46:52.888724471Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Aug 19 20:46:52 ha-876838 crio[645]: time="2024-08-19 20:46:52.893754513Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 19 20:46:52 ha-876838 crio[645]: time="2024-08-19 20:46:52.893809257Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 19 20:46:52 ha-876838 crio[645]: time="2024-08-19 20:46:52.893836325Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Aug 19 20:46:52 ha-876838 crio[645]: time="2024-08-19 20:46:52.899134796Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 19 20:46:52 ha-876838 crio[645]: time="2024-08-19 20:46:52.899184855Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 19 20:46:52 ha-876838 crio[645]: time="2024-08-19 20:46:52.899239690Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Aug 19 20:46:52 ha-876838 crio[645]: time="2024-08-19 20:46:52.912637382Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 19 20:46:52 ha-876838 crio[645]: time="2024-08-19 20:46:52.912701430Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	998e3ca584f48       fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd   37 seconds ago       Running             kube-controller-manager   8                   66f143cc3f412       kube-controller-manager-ha-876838
	b82f09a3b842e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   47 seconds ago       Running             storage-provisioner       4                   c173f339aa55b       storage-provisioner
	ebff463fe6109       7e2a4e229620ba3a757dc3699d10e8f77c453b7ee71936521668dec51669679d   48 seconds ago       Running             kube-vip                  3                   ed666fde31cc6       kube-vip-ha-876838
	d2638e4e0e209       cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388   52 seconds ago       Running             kube-apiserver            4                   c5560903ba95c       kube-apiserver-ha-876838
	897dedac43d2d       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51   About a minute ago   Running             kindnet-cni               2                   59ce63f9fe44f       kindnet-tfw52
	acf968a4f7f1c       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93   About a minute ago   Running             coredns                   2                   946c41bc3cb38       coredns-6f6b679f8f-d2bzw
	7e9eed0facd62       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   About a minute ago   Running             busybox                   2                   b1fd308909286       busybox-7dff88458-vwtq8
	89829b651ad50       71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89   About a minute ago   Running             kube-proxy                2                   f614dfc8eead4       kube-proxy-n6xdk
	7ec2b1fdfa775       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93   About a minute ago   Running             coredns                   2                   2677b5b8d6efb       coredns-6f6b679f8f-m4zj2
	3dfa031c1a7ae       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   About a minute ago   Exited              storage-provisioner       3                   c173f339aa55b       storage-provisioner
	2096622006cc4       fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd   About a minute ago   Exited              kube-controller-manager   7                   66f143cc3f412       kube-controller-manager-ha-876838
	28d896348c477       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da   2 minutes ago        Running             etcd                      2                   165bd2128a7ff       etcd-ha-876838
	6cdfd62f62e69       7e2a4e229620ba3a757dc3699d10e8f77c453b7ee71936521668dec51669679d   2 minutes ago        Exited              kube-vip                  2                   ed666fde31cc6       kube-vip-ha-876838
	c9ae5bea54a02       fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb   2 minutes ago        Running             kube-scheduler            2                   e770c86e165ac       kube-scheduler-ha-876838
	b55f83465c4b2       cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388   2 minutes ago        Exited              kube-apiserver            3                   c5560903ba95c       kube-apiserver-ha-876838
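In the table above, ATTEMPT is the CRI's creation-attempt counter for each container, so the Exited kube-apiserver (attempt 3) and kube-controller-manager (attempt 7) rows sitting next to their Running attempt-4 and attempt-8 replacements record the restarts performed while the control plane came back up. The Kubernetes-level counterpart is RestartCount in each pod's container statuses; a short sketch that prints it, reusing the clientset and imports from the sketch above (the helper name is illustrative):

    // PrintRestartCounts lists the restart count and readiness of every
    // container of the pods in a namespace (e.g. kube-system on ha-876838).
    func PrintRestartCounts(ctx context.Context, cs kubernetes.Interface, ns string) error {
    	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, p := range pods.Items {
    		for _, st := range p.Status.ContainerStatuses {
    			fmt.Printf("%s/%s restarts=%d ready=%t\n", p.Name, st.Name, st.RestartCount, st.Ready)
    		}
    	}
    	return nil
    }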
	
	
	==> coredns [7ec2b1fdfa775c8ec779dec15c54e410dffc96daa9fa0d3f1555f70c35e6f284] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54728 - 22738 "HINFO IN 7940657511817762691.4317864579729729330. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.039977652s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[807619880]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 20:46:12.523) (total time: 30000ms):
	Trace[807619880]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (20:46:42.524)
	Trace[807619880]: [30.000969324s] [30.000969324s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[708419104]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 20:46:12.523) (total time: 30000ms):
	Trace[708419104]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (20:46:42.524)
	Trace[708419104]: [30.000659731s] [30.000659731s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1192519985]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 20:46:12.523) (total time: 30001ms):
	Trace[1192519985]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (20:46:42.525)
	Trace[1192519985]: [30.001455433s] [30.001455433s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [acf968a4f7f1cb3854d3aedbd9073d6de002a56996d0561aa0a345c691ee8197] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58320 - 228 "HINFO IN 260246319201140018.882500639988919525. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.004866064s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[188246409]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 20:46:12.759) (total time: 30001ms):
	Trace[188246409]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (20:46:42.760)
	Trace[188246409]: [30.001668238s] [30.001668238s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1416564455]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 20:46:12.759) (total time: 30001ms):
	Trace[1416564455]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (20:46:42.761)
	Trace[1416564455]: [30.001319663s] [30.001319663s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[435635]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 20:46:12.760) (total time: 30001ms):
	Trace[435635]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (20:46:42.761)
	Trace[435635]: [30.00153484s] [30.00153484s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
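Both CoreDNS replicas show the same failure pattern: starting at 20:46:12 the kubernetes plugin's reflectors try to list Services, Namespaces and EndpointSlices through the in-cluster service VIP 10.96.0.1:443, and every list times out after 30 s, which is consistent with kube-apiserver having just been restarted (see the container status above). The reflector's HTTP client ultimately fails at the TCP dial, so the quickest equivalent probe from inside a pod is a plain dial with a timeout; a minimal sketch, assuming the default kubernetes Service ClusterIP of 10.96.0.1 (this address would differ in another cluster):

    package probe

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // ProbeAPIServerVIP attempts the same TCP connection the CoreDNS reflectors
    // need and reports whether the in-cluster apiserver VIP accepts it.
    func ProbeAPIServerVIP() {
    	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
    	if err != nil {
    		// Mirrors the "dial tcp 10.96.0.1:443: i/o timeout" errors above.
    		fmt.Println("apiserver VIP unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver VIP reachable")
    }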
	
	
	==> describe nodes <==
	Name:               ha-876838
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-876838
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7253360125032c7e2214e25ff4b5c894ae5844e8
	                    minikube.k8s.io/name=ha-876838
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T20_36_18_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 20:36:16 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-876838
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 20:47:26 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 19 Aug 2024 20:46:09 +0000   Mon, 19 Aug 2024 20:47:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 19 Aug 2024 20:46:09 +0000   Mon, 19 Aug 2024 20:47:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 19 Aug 2024 20:46:09 +0000   Mon, 19 Aug 2024 20:47:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 19 Aug 2024 20:46:09 +0000   Mon, 19 Aug 2024 20:47:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-876838
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9c19cb823b4a4150ba6d68cfc3898436
	  System UUID:                c6543383-e50c-417d-b759-12bcfdb00880
	  Boot ID:                    6e682a37-9512-4f3a-882d-7e45a79a9483
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vwtq8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m47s
	  kube-system                 coredns-6f6b679f8f-d2bzw             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 coredns-6f6b679f8f-m4zj2             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 etcd-ha-876838                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-tfw52                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-876838             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-876838    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-n6xdk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-876838             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-876838                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 5m10s                kube-proxy       
	  Normal   Starting                 11m                  kube-proxy       
	  Normal   Starting                 77s                  kube-proxy       
	  Warning  CgroupV1                 11m                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 11m                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m                  kubelet          Node ha-876838 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m                  kubelet          Node ha-876838 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m                  kubelet          Node ha-876838 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                  node-controller  Node ha-876838 event: Registered Node ha-876838 in Controller
	  Normal   NodeReady                10m                  kubelet          Node ha-876838 status is now: NodeReady
	  Normal   RegisteredNode           10m                  node-controller  Node ha-876838 event: Registered Node ha-876838 in Controller
	  Normal   RegisteredNode           9m29s                node-controller  Node ha-876838 event: Registered Node ha-876838 in Controller
	  Normal   NodeHasSufficientPID     6m2s (x7 over 6m2s)  kubelet          Node ha-876838 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  6m2s (x8 over 6m2s)  kubelet          Node ha-876838 status is now: NodeHasSufficientMemory
	  Normal   Starting                 6m2s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m2s                 kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasNoDiskPressure    6m2s (x8 over 6m2s)  kubelet          Node ha-876838 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           5m21s                node-controller  Node ha-876838 event: Registered Node ha-876838 in Controller
	  Normal   RegisteredNode           4m28s                node-controller  Node ha-876838 event: Registered Node ha-876838 in Controller
	  Normal   RegisteredNode           3m42s                node-controller  Node ha-876838 event: Registered Node ha-876838 in Controller
	  Normal   NodeHasNoDiskPressure    2m8s (x8 over 2m8s)  kubelet          Node ha-876838 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m8s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m8s                 kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  2m8s (x8 over 2m8s)  kubelet          Node ha-876838 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m8s (x7 over 2m8s)  kubelet          Node ha-876838 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           87s                  node-controller  Node ha-876838 event: Registered Node ha-876838 in Controller
	  Normal   RegisteredNode           35s                  node-controller  Node ha-876838 event: Registered Node ha-876838 in Controller
	  Normal   NodeNotReady             6s                   node-controller  Node ha-876838 status is now: NodeNotReady
	
	
	Name:               ha-876838-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-876838-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7253360125032c7e2214e25ff4b5c894ae5844e8
	                    minikube.k8s.io/name=ha-876838
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T20_36_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 20:36:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-876838-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 20:47:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 20:46:01 +0000   Mon, 19 Aug 2024 20:36:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 20:46:01 +0000   Mon, 19 Aug 2024 20:36:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 20:46:01 +0000   Mon, 19 Aug 2024 20:36:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 20:46:01 +0000   Mon, 19 Aug 2024 20:37:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-876838-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	System Info:
	  Machine ID:                 1528e719d49c46f9967364ace677ac4a
	  System UUID:                759233ef-b344-4940-b2e0-5fbc56a21428
	  Boot ID:                    6e682a37-9512-4f3a-882d-7e45a79a9483
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-6klbz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m47s
	  kube-system                 etcd-ha-876838-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-4vxdq                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-876838-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-876838-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-d6lm2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-876838-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-876838-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 86s                    kube-proxy       
	  Normal   Starting                 4m44s                  kube-proxy       
	  Normal   Starting                 6m51s                  kube-proxy       
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node ha-876838-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                    node-controller  Node ha-876838-m02 event: Registered Node ha-876838-m02 in Controller
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-876838-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-876838-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           10m                    node-controller  Node ha-876838-m02 event: Registered Node ha-876838-m02 in Controller
	  Normal   RegisteredNode           9m29s                  node-controller  Node ha-876838-m02 event: Registered Node ha-876838-m02 in Controller
	  Normal   NodeHasSufficientMemory  7m28s (x8 over 7m28s)  kubelet          Node ha-876838-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m28s (x8 over 7m28s)  kubelet          Node ha-876838-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 7m28s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m28s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     7m28s (x7 over 7m28s)  kubelet          Node ha-876838-m02 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 6m                     kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 6m                     kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  6m (x8 over 6m)        kubelet          Node ha-876838-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m (x8 over 6m)        kubelet          Node ha-876838-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m (x7 over 6m)        kubelet          Node ha-876838-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m21s                  node-controller  Node ha-876838-m02 event: Registered Node ha-876838-m02 in Controller
	  Normal   RegisteredNode           4m28s                  node-controller  Node ha-876838-m02 event: Registered Node ha-876838-m02 in Controller
	  Normal   RegisteredNode           3m42s                  node-controller  Node ha-876838-m02 event: Registered Node ha-876838-m02 in Controller
	  Warning  CgroupV1                 2m6s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 2m6s                   kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m6s (x8 over 2m6s)    kubelet          Node ha-876838-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m6s (x8 over 2m6s)    kubelet          Node ha-876838-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m6s (x7 over 2m6s)    kubelet          Node ha-876838-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           87s                    node-controller  Node ha-876838-m02 event: Registered Node ha-876838-m02 in Controller
	  Normal   RegisteredNode           35s                    node-controller  Node ha-876838-m02 event: Registered Node ha-876838-m02 in Controller
	
	
	Name:               ha-876838-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-876838-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7253360125032c7e2214e25ff4b5c894ae5844e8
	                    minikube.k8s.io/name=ha-876838
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T20_39_06_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 20:39:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-876838-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 20:47:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 20:47:03 +0000   Mon, 19 Aug 2024 20:47:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 20:47:03 +0000   Mon, 19 Aug 2024 20:47:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 20:47:03 +0000   Mon, 19 Aug 2024 20:47:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 20:47:03 +0000   Mon, 19 Aug 2024 20:47:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-876838-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	System Info:
	  Machine ID:                 e7e6c3c4dc554dca8f61ca8f7defcc50
	  System UUID:                4a13111c-dda0-49f6-81e7-db75dc0afc1e
	  Boot ID:                    6e682a37-9512-4f3a-882d-7e45a79a9483
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-87b7b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 kindnet-ffzz7              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m25s
	  kube-system                 kube-proxy-lvqhn           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 3m5s                   kube-proxy       
	  Normal   Starting                 8m23s                  kube-proxy       
	  Normal   Starting                 21s                    kube-proxy       
	  Normal   NodeHasNoDiskPressure    8m26s (x2 over 8m26s)  kubelet          Node ha-876838-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  8m26s (x2 over 8m26s)  kubelet          Node ha-876838-m04 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 8m26s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     8m26s (x2 over 8m26s)  kubelet          Node ha-876838-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m24s                  node-controller  Node ha-876838-m04 event: Registered Node ha-876838-m04 in Controller
	  Normal   RegisteredNode           8m24s                  node-controller  Node ha-876838-m04 event: Registered Node ha-876838-m04 in Controller
	  Normal   RegisteredNode           8m24s                  node-controller  Node ha-876838-m04 event: Registered Node ha-876838-m04 in Controller
	  Normal   NodeReady                8m11s                  kubelet          Node ha-876838-m04 status is now: NodeReady
	  Normal   RegisteredNode           5m21s                  node-controller  Node ha-876838-m04 event: Registered Node ha-876838-m04 in Controller
	  Normal   NodeNotReady             4m41s                  node-controller  Node ha-876838-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           4m28s                  node-controller  Node ha-876838-m04 event: Registered Node ha-876838-m04 in Controller
	  Normal   RegisteredNode           3m42s                  node-controller  Node ha-876838-m04 event: Registered Node ha-876838-m04 in Controller
	  Normal   Starting                 3m27s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m27s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     3m21s (x7 over 3m27s)  kubelet          Node ha-876838-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  3m14s (x8 over 3m27s)  kubelet          Node ha-876838-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m14s (x8 over 3m27s)  kubelet          Node ha-876838-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           87s                    node-controller  Node ha-876838-m04 event: Registered Node ha-876838-m04 in Controller
	  Normal   NodeNotReady             47s                    node-controller  Node ha-876838-m04 status is now: NodeNotReady
	  Normal   Starting                 39s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 39s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   RegisteredNode           35s                    node-controller  Node ha-876838-m04 event: Registered Node ha-876838-m04 in Controller
	  Normal   NodeHasSufficientPID     33s (x7 over 39s)      kubelet          Node ha-876838-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    27s (x8 over 39s)      kubelet          Node ha-876838-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  27s (x8 over 39s)      kubelet          Node ha-876838-m04 status is now: NodeHasSufficientMemory
	
	
	==> dmesg <==
	
	
	==> etcd [28d896348c477ce62c5867f93343ca4754149556680a07f43ea02de5873016f9] <==
	{"level":"info","ts":"2024-08-19T20:45:57.806030Z","caller":"traceutil/trace.go:171","msg":"trace[965270871] range","detail":"{range_begin:/registry/rolebindings/; range_end:/registry/rolebindings0; }","duration":"9.042959638s","start":"2024-08-19T20:45:48.763055Z","end":"2024-08-19T20:45:57.806015Z","steps":["trace[965270871] 'agreement among raft nodes before linearized reading'  (duration: 9.042200686s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T20:45:57.806113Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T20:45:48.763014Z","time spent":"9.043080778s","remote":"127.0.0.1:37080","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":0,"response size":0,"request content":"key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" limit:500 "}
	{"level":"warn","ts":"2024-08-19T20:45:57.806343Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"9.086090006s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" limit:500 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-08-19T20:45:57.806434Z","caller":"traceutil/trace.go:171","msg":"trace[1606850558] range","detail":"{range_begin:/registry/secrets/; range_end:/registry/secrets0; }","duration":"9.086175494s","start":"2024-08-19T20:45:48.720240Z","end":"2024-08-19T20:45:57.806416Z","steps":["trace[1606850558] 'agreement among raft nodes before linearized reading'  (duration: 9.086089013s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T20:45:57.806510Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T20:45:48.720200Z","time spent":"9.086298906s","remote":"127.0.0.1:36862","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":0,"response size":0,"request content":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" limit:500 "}
	{"level":"warn","ts":"2024-08-19T20:45:57.806565Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"9.100254461s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles\" limit:1 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-08-19T20:45:57.806606Z","caller":"traceutil/trace.go:171","msg":"trace[1010724523] range","detail":"{range_begin:/registry/clusterroles; range_end:; }","duration":"9.10029684s","start":"2024-08-19T20:45:48.706304Z","end":"2024-08-19T20:45:57.806601Z","steps":["trace[1010724523] 'agreement among raft nodes before linearized reading'  (duration: 9.100253993s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T20:45:57.806648Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T20:45:48.706283Z","time spent":"9.10035892s","remote":"127.0.0.1:37082","response type":"/etcdserverpb.KV/Range","request count":0,"request size":26,"response count":0,"response size":0,"request content":"key:\"/registry/clusterroles\" limit:1 "}
	{"level":"warn","ts":"2024-08-19T20:45:57.806713Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"9.100526098s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-node-critical\" ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-08-19T20:45:57.806760Z","caller":"traceutil/trace.go:171","msg":"trace[1213287575] range","detail":"{range_begin:/registry/priorityclasses/system-node-critical; range_end:; }","duration":"9.100572818s","start":"2024-08-19T20:45:48.706181Z","end":"2024-08-19T20:45:57.806754Z","steps":["trace[1213287575] 'agreement among raft nodes before linearized reading'  (duration: 9.10052577s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T20:45:57.806804Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T20:45:48.706130Z","time spent":"9.100667004s","remote":"127.0.0.1:37110","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":0,"response size":0,"request content":"key:\"/registry/priorityclasses/system-node-critical\" "}
	{"level":"warn","ts":"2024-08-19T20:45:57.806859Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"9.109071174s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" limit:500 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-08-19T20:45:57.806905Z","caller":"traceutil/trace.go:171","msg":"trace[818401448] range","detail":"{range_begin:/registry/validatingadmissionpolicies/; range_end:/registry/validatingadmissionpolicies0; }","duration":"9.109117491s","start":"2024-08-19T20:45:48.697780Z","end":"2024-08-19T20:45:57.806897Z","steps":["trace[818401448] 'agreement among raft nodes before linearized reading'  (duration: 9.109070796s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T20:45:57.806970Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T20:45:48.697723Z","time spent":"9.109227554s","remote":"127.0.0.1:37272","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":0,"response size":0,"request content":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" limit:500 "}
	{"level":"warn","ts":"2024-08-19T20:45:57.807023Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"9.1218478s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" limit:500 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-08-19T20:45:57.807066Z","caller":"traceutil/trace.go:171","msg":"trace[989339240] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; }","duration":"9.121892263s","start":"2024-08-19T20:45:48.685168Z","end":"2024-08-19T20:45:57.807060Z","steps":["trace[989339240] 'agreement among raft nodes before linearized reading'  (duration: 9.121847406s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T20:45:57.807110Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T20:45:48.685133Z","time spent":"9.121969555s","remote":"127.0.0.1:37216","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":0,"response size":0,"request content":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" limit:500 "}
	{"level":"warn","ts":"2024-08-19T20:45:57.789173Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"9.979665593s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-876838-m02\" ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-08-19T20:45:57.807789Z","caller":"traceutil/trace.go:171","msg":"trace[319206804] range","detail":"{range_begin:/registry/minions/ha-876838-m02; range_end:; }","duration":"9.998277596s","start":"2024-08-19T20:45:47.809501Z","end":"2024-08-19T20:45:57.807779Z","steps":["trace[319206804] 'agreement among raft nodes before linearized reading'  (duration: 9.979665387s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T20:45:57.807869Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T20:45:47.809469Z","time spent":"9.998387749s","remote":"127.0.0.1:36936","response type":"/etcdserverpb.KV/Range","request count":0,"request size":33,"response count":0,"response size":0,"request content":"key:\"/registry/minions/ha-876838-m02\" "}
	{"level":"info","ts":"2024-08-19T20:45:57.797688Z","caller":"etcdserver/v3_server.go:912","msg":"first commit in current term: resending ReadIndex request"}
	{"level":"warn","ts":"2024-08-19T20:45:57.808815Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"2.01635913s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" limit:10000 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T20:45:57.808898Z","caller":"traceutil/trace.go:171","msg":"trace[511674194] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/; range_end:/registry/apiextensions.k8s.io/customresourcedefinitions0; response_count:0; response_revision:2569; }","duration":"2.016441952s","start":"2024-08-19T20:45:55.792436Z","end":"2024-08-19T20:45:57.808878Z","steps":["trace[511674194] 'agreement among raft nodes before linearized reading'  (duration: 2.016332718s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T20:45:57.808958Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T20:45:55.792395Z","time spent":"2.016551744s","remote":"127.0.0.1:36814","response type":"/etcdserverpb.KV/Range","request count":0,"request size":121,"response count":0,"response size":29,"request content":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-19T20:45:57.820644Z","caller":"etcdserver/v3_server.go:897","msg":"ignored out-of-date read index response; local node read indexes queueing up and waiting to be in sync with leader","sent-request-id":8128031322204666632,"received-request-id":8128031322204666631}
	
	
	==> kernel <==
	 20:47:30 up  4:29,  0 users,  load average: 1.65, 2.16, 2.08
	Linux ha-876838 5.15.0-1067-aws #73~20.04.1-Ubuntu SMP Wed Jul 24 17:31:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [897dedac43d2d629235ba4d53eb9fd5b403f190a05f9441ff648e6e8b7a51d23] <==
	E0819 20:47:01.889612       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 20:47:02.880443       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 20:47:02.880478       1 main.go:299] handling current node
	I0819 20:47:02.880496       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0819 20:47:02.880503       1 main.go:322] Node ha-876838-m02 has CIDR [10.244.1.0/24] 
	I0819 20:47:02.880623       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0819 20:47:02.880637       1 main.go:322] Node ha-876838-m04 has CIDR [10.244.3.0/24] 
	I0819 20:47:12.880414       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 20:47:12.880450       1 main.go:299] handling current node
	I0819 20:47:12.880465       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0819 20:47:12.880471       1 main.go:322] Node ha-876838-m02 has CIDR [10.244.1.0/24] 
	I0819 20:47:12.880605       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0819 20:47:12.880611       1 main.go:322] Node ha-876838-m04 has CIDR [10.244.3.0/24] 
	W0819 20:47:18.657199       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 20:47:18.657256       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0819 20:47:19.526868       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 20:47:19.526903       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0819 20:47:22.879729       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 20:47:22.879857       1 main.go:299] handling current node
	I0819 20:47:22.879881       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0819 20:47:22.879913       1 main.go:322] Node ha-876838-m02 has CIDR [10.244.1.0/24] 
	I0819 20:47:22.880023       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0819 20:47:22.880036       1 main.go:322] Node ha-876838-m04 has CIDR [10.244.3.0/24] 
	W0819 20:47:23.730415       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 20:47:23.730453       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	
	
	==> kube-apiserver [b55f83465c4b261301df8838c51f57ba2edec69f911ffa68f65ba9b9d5aaca9d] <==
	W0819 20:45:58.244033       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.Lease: storage is (re)initializing
	E0819 20:45:58.244076       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.Lease: failed to list *v1.Lease: storage is (re)initializing" logger="UnhandledError"
	I0819 20:45:58.713282       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0819 20:45:59.542748       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 20:45:59.603953       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 20:45:59.609623       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 20:45:59.716895       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 20:45:59.716949       1 aggregator.go:171] initial CRD sync complete...
	I0819 20:45:59.716957       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 20:45:59.716964       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 20:45:59.716970       1 cache.go:39] Caches are synced for autoregister controller
	I0819 20:45:59.804549       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 20:46:00.102834       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 20:46:00.503343       1 cache.go:39] Caches are synced for RemoteAvailability controller
	W0819 20:46:00.516062       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I0819 20:46:00.608114       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 20:46:00.608143       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 20:46:00.910699       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 20:46:00.930304       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 20:46:00.930397       1 policy_source.go:224] refreshing policies
	I0819 20:46:00.931614       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 20:46:01.021671       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 20:46:01.029398       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0819 20:46:01.035635       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	F0819 20:46:36.704204       1 hooks.go:210] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	
	
	==> kube-apiserver [d2638e4e0e2096dec1c90e2ea30725d1910882ef26b28f8ae6c082f3ef3a4013] <==
	I0819 20:46:41.213738       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0819 20:46:41.213795       1 system_namespaces_controller.go:66] Starting system namespaces controller
	I0819 20:46:41.325309       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 20:46:41.328082       1 aggregator.go:171] initial CRD sync complete...
	I0819 20:46:41.328109       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 20:46:41.328117       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 20:46:41.374652       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0819 20:46:41.374849       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0819 20:46:41.376999       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 20:46:41.413835       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 20:46:41.417106       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 20:46:41.427512       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 20:46:41.435878       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 20:46:41.478949       1 cache.go:39] Caches are synced for autoregister controller
	I0819 20:46:41.512149       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 20:46:41.512184       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 20:46:41.512192       1 policy_source.go:224] refreshing policies
	I0819 20:46:41.515404       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 20:46:41.570005       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 20:46:41.614235       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 20:46:41.614426       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 20:46:42.226220       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0819 20:46:42.700220       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I0819 20:46:42.701843       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 20:46:42.711149       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2096622006cc4f6bf27c6ebe2786c1249f0eebcbdb38538358433d7cea7ae84b] <==
	I0819 20:46:13.361342       1 serving.go:386] Generated self-signed cert in-memory
	I0819 20:46:14.628923       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0819 20:46:14.628956       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 20:46:14.630449       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0819 20:46:14.630655       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0819 20:46:14.630751       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0819 20:46:14.630864       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0819 20:46:24.650826       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [998e3ca584f48c6dd18ab4afaf44e842021a349c3904d8bb80cdf1e13148171c] <==
	I0819 20:46:56.076682       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-876838-m04"
	I0819 20:46:56.377835       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 20:46:56.377873       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0819 20:46:56.402381       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 20:47:03.968465       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-876838-m04"
	I0819 20:47:03.968683       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-876838-m04"
	I0819 20:47:03.983341       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-876838-m04"
	I0819 20:47:04.032656       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-876838-m04"
	I0819 20:47:07.167275       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="45.793µs"
	I0819 20:47:08.294540       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="60.931µs"
	I0819 20:47:09.385213       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="83.234007ms"
	I0819 20:47:09.385320       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="70.867µs"
	I0819 20:47:24.053789       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-876838-m04"
	I0819 20:47:24.053873       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-876838"
	I0819 20:47:24.102269       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-876838"
	I0819 20:47:24.248825       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.532084ms"
	I0819 20:47:24.249037       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="63.589µs"
	I0819 20:47:26.107202       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-876838"
	I0819 20:47:26.665169       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="16.816653ms"
	I0819 20:47:26.665360       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="61.013µs"
	I0819 20:47:26.724140       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-qm48t EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-qm48t\": the object has been modified; please apply your changes to the latest version and try again"
	I0819 20:47:26.724557       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"f44c7e9c-d5c2-4a6c-bf7b-f10d6a482f1d", APIVersion:"v1", ResourceVersion:"247", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-qm48t EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-qm48t": the object has been modified; please apply your changes to the latest version and try again
	I0819 20:47:26.799486       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="28.152173ms"
	I0819 20:47:26.799672       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="57.239µs"
	I0819 20:47:29.294454       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-876838"
	
	
	==> kube-proxy [89829b651ad503cb40a31b429f75b23cc474ba4b5db9167a72817d357466d47e] <==
	I0819 20:46:12.866355       1 server_linux.go:66] "Using iptables proxy"
	I0819 20:46:13.062033       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0819 20:46:13.062104       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 20:46:13.100452       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0819 20:46:13.100528       1 server_linux.go:169] "Using iptables Proxier"
	I0819 20:46:13.102482       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 20:46:13.102947       1 server.go:483] "Version info" version="v1.31.0"
	I0819 20:46:13.102971       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 20:46:13.107478       1 config.go:197] "Starting service config controller"
	I0819 20:46:13.107502       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 20:46:13.107518       1 config.go:104] "Starting endpoint slice config controller"
	I0819 20:46:13.107522       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 20:46:13.107870       1 config.go:326] "Starting node config controller"
	I0819 20:46:13.107886       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 20:46:13.208053       1 shared_informer.go:320] Caches are synced for node config
	I0819 20:46:13.208096       1 shared_informer.go:320] Caches are synced for service config
	I0819 20:46:13.208127       1 shared_informer.go:320] Caches are synced for endpoint slice config
	W0819 20:47:26.567830       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2752": http2: client connection lost
	E0819 20:47:26.567896       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2752\": http2: client connection lost" logger="UnhandledError"
	W0819 20:47:26.567974       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2820": http2: client connection lost
	E0819 20:47:26.568003       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2820\": http2: client connection lost" logger="UnhandledError"
	W0819 20:47:26.567978       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-876838&resourceVersion=2754": http2: client connection lost
	E0819 20:47:26.568113       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-876838&resourceVersion=2754\": http2: client connection lost" logger="UnhandledError"
	
	
	==> kube-scheduler [c9ae5bea54a02e18607a21b4faaa3235e2bdf34bfd29d5ea339ceba1b20e2535] <==
	E0819 20:45:57.256309       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 20:45:57.435618       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 20:45:57.435670       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 20:45:57.552621       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 20:45:57.552673       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 20:45:57.966085       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 20:45:57.966228       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 20:45:58.924893       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 20:45:58.924945       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0819 20:46:19.966763       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 20:46:41.257997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services) - error from a previous attempt: read tcp 192.168.49.2:36518->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 20:46:41.258081       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods) - error from a previous attempt: read tcp 192.168.49.2:36530->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 20:46:41.258125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:36502->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 20:46:41.304802       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces) - error from a previous attempt: read tcp 192.168.49.2:36522->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 20:46:41.304903       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy) - error from a previous attempt: read tcp 192.168.49.2:36618->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 20:46:41.305004       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps) - error from a previous attempt: read tcp 192.168.49.2:36634->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 20:46:41.305087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:36608->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 20:46:41.305165       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers) - error from a previous attempt: read tcp 192.168.49.2:36602->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 20:46:41.305239       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:36592->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 20:46:41.305306       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps) - error from a previous attempt: read tcp 192.168.49.2:36584->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 20:46:41.305378       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes) - error from a previous attempt: read tcp 192.168.49.2:36570->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 20:46:41.305451       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:36556->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 20:46:41.305527       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes) - error from a previous attempt: read tcp 192.168.49.2:36546->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 20:46:41.305620       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims) - error from a previous attempt: read tcp 192.168.49.2:36544->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 20:46:41.427390       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.2:36650->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	
	
	==> kubelet <==
	Aug 19 20:47:19 ha-876838 kubelet[760]: E0819 20:47:19.961410     760 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-876838?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Aug 19 20:47:22 ha-876838 kubelet[760]: E0819 20:47:22.612024     760 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724100442611556343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 20:47:22 ha-876838 kubelet[760]: E0819 20:47:22.612493     760 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724100442611556343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156833,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 20:47:26 ha-876838 kubelet[760]: W0819 20:47:26.592462     760 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2751": http2: client connection lost
	Aug 19 20:47:26 ha-876838 kubelet[760]: E0819 20:47:26.592529     760 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2751\": http2: client connection lost" logger="UnhandledError"
	Aug 19 20:47:26 ha-876838 kubelet[760]: W0819 20:47:26.592583     760 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=2751": http2: client connection lost
	Aug 19 20:47:26 ha-876838 kubelet[760]: E0819 20:47:26.592603     760 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=2751\": http2: client connection lost" logger="UnhandledError"
	Aug 19 20:47:26 ha-876838 kubelet[760]: W0819 20:47:26.592648     760 reflector.go:561] pkg/kubelet/config/apiserver.go:66: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dha-876838&resourceVersion=2856": http2: client connection lost
	Aug 19 20:47:26 ha-876838 kubelet[760]: E0819 20:47:26.592665     760 reflector.go:158] "Unhandled Error" err="pkg/kubelet/config/apiserver.go:66: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dha-876838&resourceVersion=2856\": http2: client connection lost" logger="UnhandledError"
	Aug 19 20:47:26 ha-876838 kubelet[760]: W0819 20:47:26.592707     760 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2751": http2: client connection lost
	Aug 19 20:47:26 ha-876838 kubelet[760]: E0819 20:47:26.592747     760 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2751\": http2: client connection lost" logger="UnhandledError"
	Aug 19 20:47:26 ha-876838 kubelet[760]: W0819 20:47:26.592792     760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2752": http2: client connection lost
	Aug 19 20:47:26 ha-876838 kubelet[760]: E0819 20:47:26.592819     760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2752\": http2: client connection lost" logger="UnhandledError"
	Aug 19 20:47:26 ha-876838 kubelet[760]: W0819 20:47:26.592860     760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2752": http2: client connection lost
	Aug 19 20:47:26 ha-876838 kubelet[760]: E0819 20:47:26.592884     760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2752\": http2: client connection lost" logger="UnhandledError"
	Aug 19 20:47:26 ha-876838 kubelet[760]: W0819 20:47:26.592923     760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2593": http2: client connection lost
	Aug 19 20:47:26 ha-876838 kubelet[760]: E0819 20:47:26.592946     760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2593\": http2: client connection lost" logger="UnhandledError"
	Aug 19 20:47:26 ha-876838 kubelet[760]: E0819 20:47:26.592991     760 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-876838?timeout=10s\": http2: client connection lost"
	Aug 19 20:47:26 ha-876838 kubelet[760]: I0819 20:47:26.593013     760 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Aug 19 20:47:26 ha-876838 kubelet[760]: I0819 20:47:26.593168     760 status_manager.go:851] "Failed to get status for pod" podUID="dd247070e4f35420b8247ecff5d7a7da" pod="kube-system/kube-vip-ha-876838" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-vip-ha-876838\": http2: client connection lost"
	Aug 19 20:47:26 ha-876838 kubelet[760]: W0819 20:47:26.593307     760 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=2751": http2: client connection lost
	Aug 19 20:47:26 ha-876838 kubelet[760]: E0819 20:47:26.593333     760 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=2751\": http2: client connection lost" logger="UnhandledError"
	Aug 19 20:47:26 ha-876838 kubelet[760]: E0819 20:47:26.593379     760 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-876838.17ed3c1d587b97d2\": http2: client connection lost" event="&Event{ObjectMeta:{kube-apiserver-ha-876838.17ed3c1d587b97d2  kube-system   2635 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-876838,UID:0ee9bf79f1e0ca30b1bd4fb91bd66cea,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"registry.k8s.io/kube-apiserver:v1.31.0\" already present on machine,Source:EventSource{Component:kubelet,Host:ha-876838,},FirstTimestamp:2024-08-19 20:45:29 +0000 UTC,LastTimestamp:2024-08-19 20:46:37.808844744 +0000 UTC m=+75.418592181,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Act
ion:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-876838,}"
	Aug 19 20:47:26 ha-876838 kubelet[760]: W0819 20:47:26.593662     760 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-876838&resourceVersion=2754": http2: client connection lost
	Aug 19 20:47:26 ha-876838 kubelet[760]: E0819 20:47:26.593707     760 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-876838&resourceVersion=2754\": http2: client connection lost" logger="UnhandledError"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-876838 -n ha-876838
helpers_test.go:261: (dbg) Run:  kubectl --context ha-876838 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (137.21s)
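Note: the RestartCluster step above is one of the TestMultiControlPlane serial subtests and depends on the cluster created by the earlier steps, so reproducing it locally generally means re-running the whole serial group rather than this subtest alone. A minimal sketch, assuming a checkout of the minikube repository with out/minikube-linux-arm64 already built; the -minikube-start-args flag is the upstream integration-test harness flag and its exact name may differ between versions:

	# run the full TestMultiControlPlane serial group against the docker driver with cri-o
	go test -v -timeout 60m ./test/integration \
	  -run "TestMultiControlPlane" \
	  -args -minikube-start-args="--driver=docker --container-runtime=crio"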

                                                
                                    

Test pass (295/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.37
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.0/json-events 6.97
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.07
18 TestDownloadOnly/v1.31.0/DeleteAll 0.2
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.58
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 205.05
31 TestAddons/serial/GCPAuth/Namespaces 0.2
33 TestAddons/parallel/Registry 15.73
35 TestAddons/parallel/InspektorGadget 11.91
39 TestAddons/parallel/CSI 38.16
40 TestAddons/parallel/Headlamp 17.87
41 TestAddons/parallel/CloudSpanner 6.58
42 TestAddons/parallel/LocalPath 52.59
43 TestAddons/parallel/NvidiaDevicePlugin 6.51
44 TestAddons/parallel/Yakd 11.76
45 TestAddons/StoppedEnableDisable 12.34
46 TestCertOptions 35.42
47 TestCertExpiration 261.05
49 TestForceSystemdFlag 44.14
50 TestForceSystemdEnv 43.7
56 TestErrorSpam/setup 31.89
57 TestErrorSpam/start 0.8
58 TestErrorSpam/status 1.11
59 TestErrorSpam/pause 1.84
60 TestErrorSpam/unpause 1.87
61 TestErrorSpam/stop 1.46
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 54.34
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 25.88
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.13
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.98
73 TestFunctional/serial/CacheCmd/cache/add_local 1.44
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.14
78 TestFunctional/serial/CacheCmd/cache/delete 0.11
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
81 TestFunctional/serial/ExtraConfig 37.89
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.78
84 TestFunctional/serial/LogsFileCmd 2
85 TestFunctional/serial/InvalidService 4.9
87 TestFunctional/parallel/ConfigCmd 0.48
88 TestFunctional/parallel/DashboardCmd 9.81
89 TestFunctional/parallel/DryRun 0.47
90 TestFunctional/parallel/InternationalLanguage 0.21
91 TestFunctional/parallel/StatusCmd 1.17
95 TestFunctional/parallel/ServiceCmdConnect 11.64
96 TestFunctional/parallel/AddonsCmd 0.19
97 TestFunctional/parallel/PersistentVolumeClaim 26.35
99 TestFunctional/parallel/SSHCmd 0.66
100 TestFunctional/parallel/CpCmd 2.16
102 TestFunctional/parallel/FileSync 0.34
103 TestFunctional/parallel/CertSync 2.13
107 TestFunctional/parallel/NodeLabels 0.14
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.69
111 TestFunctional/parallel/License 0.27
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.63
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.48
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 6.23
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
125 TestFunctional/parallel/ProfileCmd/profile_list 0.38
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
127 TestFunctional/parallel/MountCmd/any-port 7.11
128 TestFunctional/parallel/ServiceCmd/List 0.58
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.61
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
131 TestFunctional/parallel/ServiceCmd/Format 0.39
132 TestFunctional/parallel/ServiceCmd/URL 0.37
133 TestFunctional/parallel/MountCmd/specific-port 2.41
134 TestFunctional/parallel/MountCmd/VerifyCleanup 2.58
135 TestFunctional/parallel/Version/short 0.08
136 TestFunctional/parallel/Version/components 1.27
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.38
141 TestFunctional/parallel/ImageCommands/ImageBuild 2.86
142 TestFunctional/parallel/ImageCommands/Setup 0.71
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.6
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.18
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.32
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.58
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.72
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.81
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.6
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 174.84
160 TestMultiControlPlane/serial/DeployApp 6.91
161 TestMultiControlPlane/serial/PingHostFromPods 1.64
162 TestMultiControlPlane/serial/AddWorkerNode 35.31
163 TestMultiControlPlane/serial/NodeLabels 0.12
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.75
165 TestMultiControlPlane/serial/CopyFile 19.23
166 TestMultiControlPlane/serial/StopSecondaryNode 12.94
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.91
168 TestMultiControlPlane/serial/RestartSecondaryNode 33.58
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 9.29
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 223.13
171 TestMultiControlPlane/serial/DeleteSecondaryNode 11.8
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.52
173 TestMultiControlPlane/serial/StopCluster 35.77
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.62
176 TestMultiControlPlane/serial/AddSecondaryNode 46.36
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.76
181 TestJSONOutput/start/Command 52.23
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.75
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.65
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.79
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.25
206 TestKicCustomNetwork/create_custom_network 40.86
207 TestKicCustomNetwork/use_default_bridge_network 34.16
208 TestKicExistingNetwork 33.04
209 TestKicCustomSubnet 36.94
210 TestKicStaticIP 32.91
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 67.7
215 TestMountStart/serial/StartWithMountFirst 9.35
216 TestMountStart/serial/VerifyMountFirst 0.25
217 TestMountStart/serial/StartWithMountSecond 7.38
218 TestMountStart/serial/VerifyMountSecond 0.26
219 TestMountStart/serial/DeleteFirst 1.67
220 TestMountStart/serial/VerifyMountPostDelete 0.26
221 TestMountStart/serial/Stop 1.23
222 TestMountStart/serial/RestartStopped 7.91
223 TestMountStart/serial/VerifyMountPostStop 0.26
226 TestMultiNode/serial/FreshStart2Nodes 82.23
227 TestMultiNode/serial/DeployApp2Nodes 4.95
228 TestMultiNode/serial/PingHostFrom2Pods 1.02
229 TestMultiNode/serial/AddNode 31.26
230 TestMultiNode/serial/MultiNodeLabels 0.09
231 TestMultiNode/serial/ProfileList 0.32
232 TestMultiNode/serial/CopyFile 9.97
233 TestMultiNode/serial/StopNode 2.21
234 TestMultiNode/serial/StartAfterStop 9.94
235 TestMultiNode/serial/RestartKeepsNodes 102.08
236 TestMultiNode/serial/DeleteNode 5.56
237 TestMultiNode/serial/StopMultiNode 23.96
238 TestMultiNode/serial/RestartMultiNode 54.65
239 TestMultiNode/serial/ValidateNameConflict 36.55
244 TestPreload 139.65
246 TestScheduledStopUnix 105.31
249 TestInsufficientStorage 10.62
250 TestRunningBinaryUpgrade 77.05
252 TestKubernetesUpgrade 391.14
253 TestMissingContainerUpgrade 158.08
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
256 TestNoKubernetes/serial/StartWithK8s 41.12
257 TestNoKubernetes/serial/StartWithStopK8s 9.67
258 TestNoKubernetes/serial/Start 7.92
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.36
260 TestNoKubernetes/serial/ProfileList 1.09
261 TestNoKubernetes/serial/Stop 1.29
262 TestNoKubernetes/serial/StartNoArgs 8.12
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
264 TestStoppedBinaryUpgrade/Setup 1.46
265 TestStoppedBinaryUpgrade/Upgrade 85.03
266 TestStoppedBinaryUpgrade/MinikubeLogs 1.23
275 TestPause/serial/Start 51.85
276 TestPause/serial/SecondStartNoReconfiguration 27.05
277 TestPause/serial/Pause 0.95
278 TestPause/serial/VerifyStatus 0.37
279 TestPause/serial/Unpause 0.69
280 TestPause/serial/PauseAgain 0.85
281 TestPause/serial/DeletePaused 2.63
282 TestPause/serial/VerifyDeletedResources 0.34
290 TestNetworkPlugins/group/false 5.33
295 TestStartStop/group/old-k8s-version/serial/FirstStart 180.58
296 TestStartStop/group/old-k8s-version/serial/DeployApp 9.68
298 TestStartStop/group/no-preload/serial/FirstStart 71
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.41
300 TestStartStop/group/old-k8s-version/serial/Stop 13.72
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.31
302 TestStartStop/group/old-k8s-version/serial/SecondStart 371.4
303 TestStartStop/group/no-preload/serial/DeployApp 9.41
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.2
305 TestStartStop/group/no-preload/serial/Stop 12.01
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
307 TestStartStop/group/no-preload/serial/SecondStart 267.09
308 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
309 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.11
310 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
311 TestStartStop/group/no-preload/serial/Pause 3.15
313 TestStartStop/group/embed-certs/serial/FirstStart 55.38
314 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
315 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
316 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.32
317 TestStartStop/group/old-k8s-version/serial/Pause 3.67
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 52.75
320 TestStartStop/group/embed-certs/serial/DeployApp 9.48
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.66
322 TestStartStop/group/embed-certs/serial/Stop 12.11
323 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
324 TestStartStop/group/embed-certs/serial/SecondStart 297.13
325 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.42
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.62
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.89
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.32
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 279.58
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.12
332 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
333 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
334 TestStartStop/group/embed-certs/serial/Pause 3.08
335 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
337 TestStartStop/group/newest-cni/serial/FirstStart 47.99
338 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.42
339 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.93
340 TestNetworkPlugins/group/auto/Start 58.15
341 TestStartStop/group/newest-cni/serial/DeployApp 0
342 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.39
343 TestStartStop/group/newest-cni/serial/Stop 1.4
344 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.34
345 TestStartStop/group/newest-cni/serial/SecondStart 17.6
346 TestNetworkPlugins/group/auto/KubeletFlags 0.4
347 TestNetworkPlugins/group/auto/NetCatPod 12.55
348 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
349 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.35
351 TestStartStop/group/newest-cni/serial/Pause 3.96
352 TestNetworkPlugins/group/kindnet/Start 55.6
353 TestNetworkPlugins/group/auto/DNS 0.25
354 TestNetworkPlugins/group/auto/Localhost 0.18
355 TestNetworkPlugins/group/auto/HairPin 0.18
356 TestNetworkPlugins/group/calico/Start 66.63
357 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.42
359 TestNetworkPlugins/group/kindnet/NetCatPod 11.27
360 TestNetworkPlugins/group/kindnet/DNS 0.21
361 TestNetworkPlugins/group/kindnet/Localhost 0.18
362 TestNetworkPlugins/group/kindnet/HairPin 0.21
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/custom-flannel/Start 65.26
365 TestNetworkPlugins/group/calico/KubeletFlags 0.4
366 TestNetworkPlugins/group/calico/NetCatPod 13.39
367 TestNetworkPlugins/group/calico/DNS 0.25
368 TestNetworkPlugins/group/calico/Localhost 0.19
369 TestNetworkPlugins/group/calico/HairPin 0.18
370 TestNetworkPlugins/group/enable-default-cni/Start 75.3
371 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.36
372 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.36
373 TestNetworkPlugins/group/custom-flannel/DNS 0.19
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
376 TestNetworkPlugins/group/flannel/Start 57.82
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.32
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.32
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
382 TestNetworkPlugins/group/bridge/Start 75.94
383 TestNetworkPlugins/group/flannel/ControllerPod 6.01
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.37
385 TestNetworkPlugins/group/flannel/NetCatPod 13.37
386 TestNetworkPlugins/group/flannel/DNS 0.26
387 TestNetworkPlugins/group/flannel/Localhost 0.22
388 TestNetworkPlugins/group/flannel/HairPin 0.22
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
390 TestNetworkPlugins/group/bridge/NetCatPod 10.28
391 TestNetworkPlugins/group/bridge/DNS 0.17
392 TestNetworkPlugins/group/bridge/Localhost 0.15
393 TestNetworkPlugins/group/bridge/HairPin 0.15
x
+
TestDownloadOnly/v1.20.0/json-events (7.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-995943 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-995943 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.367292124s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.37s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-995943
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-995943: exit status 85 (71.042352ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-995943 | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC |          |
	|         | -p download-only-995943        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 20:21:09
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 20:21:09.817029 1011468 out.go:345] Setting OutFile to fd 1 ...
	I0819 20:21:09.817183 1011468 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:21:09.817195 1011468 out.go:358] Setting ErrFile to fd 2...
	I0819 20:21:09.817200 1011468 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:21:09.817467 1011468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1006087/.minikube/bin
	W0819 20:21:09.817632 1011468 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19423-1006087/.minikube/config/config.json: open /home/jenkins/minikube-integration/19423-1006087/.minikube/config/config.json: no such file or directory
	I0819 20:21:09.818077 1011468 out.go:352] Setting JSON to true
	I0819 20:21:09.818978 1011468 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":14611,"bootTime":1724084259,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0819 20:21:09.819059 1011468 start.go:139] virtualization:  
	I0819 20:21:09.821563 1011468 out.go:97] [download-only-995943] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0819 20:21:09.821799 1011468 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19423-1006087/.minikube/cache/preloaded-tarball: no such file or directory
	I0819 20:21:09.821871 1011468 notify.go:220] Checking for updates...
	I0819 20:21:09.823105 1011468 out.go:169] MINIKUBE_LOCATION=19423
	I0819 20:21:09.824919 1011468 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 20:21:09.826509 1011468 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19423-1006087/kubeconfig
	I0819 20:21:09.827907 1011468 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1006087/.minikube
	I0819 20:21:09.829732 1011468 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0819 20:21:09.832513 1011468 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 20:21:09.832779 1011468 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 20:21:09.854242 1011468 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 20:21:09.854356 1011468 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 20:21:09.927525 1011468 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-19 20:21:09.917303067 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 20:21:09.927729 1011468 docker.go:307] overlay module found
	I0819 20:21:09.929195 1011468 out.go:97] Using the docker driver based on user configuration
	I0819 20:21:09.929248 1011468 start.go:297] selected driver: docker
	I0819 20:21:09.929256 1011468 start.go:901] validating driver "docker" against <nil>
	I0819 20:21:09.929384 1011468 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 20:21:09.982884 1011468 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-19 20:21:09.972769072 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 20:21:09.983047 1011468 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 20:21:09.983334 1011468 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0819 20:21:09.983496 1011468 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 20:21:09.985080 1011468 out.go:169] Using Docker driver with root privileges
	I0819 20:21:09.986470 1011468 cni.go:84] Creating CNI manager for ""
	I0819 20:21:09.986503 1011468 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 20:21:09.986519 1011468 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 20:21:09.986602 1011468 start.go:340] cluster config:
	{Name:download-only-995943 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-995943 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 20:21:09.988306 1011468 out.go:97] Starting "download-only-995943" primary control-plane node in "download-only-995943" cluster
	I0819 20:21:09.988338 1011468 cache.go:121] Beginning downloading kic base image for docker with crio
	I0819 20:21:09.990006 1011468 out.go:97] Pulling base image v0.0.44-1723740748-19452 ...
	I0819 20:21:09.990035 1011468 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 20:21:09.990089 1011468 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 20:21:10.020484 1011468 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 20:21:10.020734 1011468 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 20:21:10.020843 1011468 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 20:21:10.048938 1011468 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0819 20:21:10.048967 1011468 cache.go:56] Caching tarball of preloaded images
	I0819 20:21:10.049745 1011468 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 20:21:10.051779 1011468 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0819 20:21:10.051816 1011468 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0819 20:21:10.140840 1011468 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/19423-1006087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-995943 host does not exist
	  To start a cluster, run: "minikube start -p download-only-995943"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-995943
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/json-events (6.97s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-983156 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-983156 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.965253589s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (6.97s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-983156
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-983156: exit status 85 (69.744209ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-995943 | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC |                     |
	|         | -p download-only-995943        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC | 19 Aug 24 20:21 UTC |
	| delete  | -p download-only-995943        | download-only-995943 | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC | 19 Aug 24 20:21 UTC |
	| start   | -o=json --download-only        | download-only-983156 | jenkins | v1.33.1 | 19 Aug 24 20:21 UTC |                     |
	|         | -p download-only-983156        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 20:21:17
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 20:21:17.598122 1011672 out.go:345] Setting OutFile to fd 1 ...
	I0819 20:21:17.598268 1011672 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:21:17.598280 1011672 out.go:358] Setting ErrFile to fd 2...
	I0819 20:21:17.598286 1011672 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:21:17.598518 1011672 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1006087/.minikube/bin
	I0819 20:21:17.598936 1011672 out.go:352] Setting JSON to true
	I0819 20:21:17.599810 1011672 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":14619,"bootTime":1724084259,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0819 20:21:17.599882 1011672 start.go:139] virtualization:  
	I0819 20:21:17.601721 1011672 out.go:97] [download-only-983156] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 20:21:17.601936 1011672 notify.go:220] Checking for updates...
	I0819 20:21:17.603272 1011672 out.go:169] MINIKUBE_LOCATION=19423
	I0819 20:21:17.604986 1011672 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 20:21:17.606698 1011672 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19423-1006087/kubeconfig
	I0819 20:21:17.607929 1011672 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1006087/.minikube
	I0819 20:21:17.609327 1011672 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0819 20:21:17.612031 1011672 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 20:21:17.612418 1011672 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 20:21:17.635702 1011672 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 20:21:17.635820 1011672 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 20:21:17.702830 1011672 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-19 20:21:17.692607098 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 20:21:17.702942 1011672 docker.go:307] overlay module found
	I0819 20:21:17.704637 1011672 out.go:97] Using the docker driver based on user configuration
	I0819 20:21:17.704669 1011672 start.go:297] selected driver: docker
	I0819 20:21:17.704676 1011672 start.go:901] validating driver "docker" against <nil>
	I0819 20:21:17.704843 1011672 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 20:21:17.762224 1011672 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-19 20:21:17.752300842 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 20:21:17.762410 1011672 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 20:21:17.762716 1011672 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0819 20:21:17.762886 1011672 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 20:21:17.764654 1011672 out.go:169] Using Docker driver with root privileges
	I0819 20:21:17.766018 1011672 cni.go:84] Creating CNI manager for ""
	I0819 20:21:17.766040 1011672 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 20:21:17.766060 1011672 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 20:21:17.766149 1011672 start.go:340] cluster config:
	{Name:download-only-983156 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-983156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 20:21:17.767701 1011672 out.go:97] Starting "download-only-983156" primary control-plane node in "download-only-983156" cluster
	I0819 20:21:17.767733 1011672 cache.go:121] Beginning downloading kic base image for docker with crio
	I0819 20:21:17.769151 1011672 out.go:97] Pulling base image v0.0.44-1723740748-19452 ...
	I0819 20:21:17.769179 1011672 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 20:21:17.769353 1011672 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 20:21:17.785573 1011672 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 20:21:17.785723 1011672 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 20:21:17.785749 1011672 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0819 20:21:17.785755 1011672 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0819 20:21:17.785767 1011672 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 20:21:17.829903 1011672 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	I0819 20:21:17.829936 1011672 cache.go:56] Caching tarball of preloaded images
	I0819 20:21:17.830115 1011672 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 20:21:17.831734 1011672 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0819 20:21:17.831765 1011672 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4 ...
	I0819 20:21:17.913664 1011672 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e6af375765e1700a37be5f07489fb80e -> /home/jenkins/minikube-integration/19423-1006087/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-983156 host does not exist
	  To start a cluster, run: "minikube start -p download-only-983156"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-983156
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-526736 --alsologtostderr --binary-mirror http://127.0.0.1:38541 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-526736" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-526736
--- PASS: TestBinaryMirror (0.58s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-199708
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-199708: exit status 85 (66.070351ms)

                                                
                                                
-- stdout --
	* Profile "addons-199708" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-199708"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-199708
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-199708: exit status 85 (66.828632ms)

                                                
                                                
-- stdout --
	* Profile "addons-199708" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-199708"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (205.05s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-199708 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-199708 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (3m25.051100173s)
--- PASS: TestAddons/Setup (205.05s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.2s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-199708 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-199708 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

                                                
                                    
x
+
TestAddons/parallel/Registry (15.73s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 6.859934ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-2d8zw" [571a9575-3986-40cc-80d1-071415cf3a04] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004612801s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-mtrlv" [fe09b5f8-66ed-4907-8d46-d177a6e3922f] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004583306s
addons_test.go:342: (dbg) Run:  kubectl --context addons-199708 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-199708 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-199708 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.779452343s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-199708 ip
2024/08/19 20:25:24 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-199708 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.73s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.91s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-z42z9" [d7cb41f0-7e24-43e5-8cf0-2a6af1ab15fc] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004821423s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-199708
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-199708: (5.903531073s)
--- PASS: TestAddons/parallel/InspektorGadget (11.91s)

                                                
                                    
x
+
TestAddons/parallel/CSI (38.16s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 12.234528ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-199708 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-199708 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-199708 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-199708 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-199708 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-199708 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-199708 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-199708 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [af4900bc-b893-419d-8f25-af18be39b9d4] Pending
helpers_test.go:344: "task-pv-pod" [af4900bc-b893-419d-8f25-af18be39b9d4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [af4900bc-b893-419d-8f25-af18be39b9d4] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.010004989s
addons_test.go:590: (dbg) Run:  kubectl --context addons-199708 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-199708 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-199708 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-199708 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-199708 delete pod task-pv-pod: (1.194012854s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-199708 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-199708 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-199708 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-199708 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-199708 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-199708 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-199708 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c593978b-4af7-42c9-8bb5-1c5005020a18] Pending
helpers_test.go:344: "task-pv-pod-restore" [c593978b-4af7-42c9-8bb5-1c5005020a18] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c593978b-4af7-42c9-8bb5-1c5005020a18] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003601207s
addons_test.go:632: (dbg) Run:  kubectl --context addons-199708 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-199708 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-199708 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-199708 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-199708 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.783771491s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-199708 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (38.16s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (17.87s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-199708 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-199708 --alsologtostderr -v=1: (1.051147635s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-fq5dk" [f007e71a-54e4-46c0-9611-44ca2ec6a83d] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-fq5dk" [f007e71a-54e4-46c0-9611-44ca2ec6a83d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-fq5dk" [f007e71a-54e4-46c0-9611-44ca2ec6a83d] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003154331s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-199708 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-199708 addons disable headlamp --alsologtostderr -v=1: (5.811167512s)
--- PASS: TestAddons/parallel/Headlamp (17.87s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.58s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-kbtbm" [66af26f9-2cc8-4b5c-a5da-7da60a36bf25] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003800998s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-199708
--- PASS: TestAddons/parallel/CloudSpanner (6.58s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (52.59s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-199708 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-199708 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-199708 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-199708 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-199708 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-199708 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-199708 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-199708 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [70e2a124-48bf-4d8f-907b-6da90be0e565] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [70e2a124-48bf-4d8f-907b-6da90be0e565] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [70e2a124-48bf-4d8f-907b-6da90be0e565] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.00412035s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-199708 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-199708 ssh "cat /opt/local-path-provisioner/pvc-da75018b-e55e-4bcd-afd0-fef3a5381dbe_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-199708 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-199708 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-199708 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-199708 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.466202546s)
--- PASS: TestAddons/parallel/LocalPath (52.59s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.51s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-6p75r" [03198291-96ab-4c9c-8393-70aa68bb887b] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004121089s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-199708
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.51s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.76s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-qdghg" [552be655-74db-4ad3-bb0a-970667193970] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003683117s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-199708 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-199708 addons disable yakd --alsologtostderr -v=1: (5.75071544s)
--- PASS: TestAddons/parallel/Yakd (11.76s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.34s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-199708
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-199708: (11.990514863s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-199708
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-199708
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-199708
--- PASS: TestAddons/StoppedEnableDisable (12.34s)

                                                
                                    
x
+
TestCertOptions (35.42s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-795138 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-795138 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (32.744185142s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-795138 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-795138 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-795138 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-795138" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-795138
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-795138: (1.996000732s)
--- PASS: TestCertOptions (35.42s)

                                                
                                    
x
+
TestCertExpiration (261.05s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-489740 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
E0819 21:12:56.619217 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/functional-915934/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-489740 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (42.535910971s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-489740 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-489740 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (36.143821617s)
helpers_test.go:175: Cleaning up "cert-expiration-489740" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-489740
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-489740: (2.373025075s)
--- PASS: TestCertExpiration (261.05s)

                                                
                                    
x
+
TestForceSystemdFlag (44.14s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-985882 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-985882 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.181633957s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-985882 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-985882" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-985882
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-985882: (2.589637907s)
--- PASS: TestForceSystemdFlag (44.14s)

                                                
                                    
x
+
TestForceSystemdEnv (43.7s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-312483 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-312483 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.332865162s)
helpers_test.go:175: Cleaning up "force-systemd-env-312483" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-312483
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-312483: (2.369357417s)
--- PASS: TestForceSystemdEnv (43.70s)

                                                
                                    
x
+
TestErrorSpam/setup (31.89s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-914294 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-914294 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-914294 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-914294 --driver=docker  --container-runtime=crio: (31.888687864s)
--- PASS: TestErrorSpam/setup (31.89s)

                                                
                                    
x
+
TestErrorSpam/start (0.8s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914294 --log_dir /tmp/nospam-914294 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914294 --log_dir /tmp/nospam-914294 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914294 --log_dir /tmp/nospam-914294 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

                                                
                                    
x
+
TestErrorSpam/status (1.11s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914294 --log_dir /tmp/nospam-914294 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914294 --log_dir /tmp/nospam-914294 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914294 --log_dir /tmp/nospam-914294 status
--- PASS: TestErrorSpam/status (1.11s)

                                                
                                    
x
+
TestErrorSpam/pause (1.84s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914294 --log_dir /tmp/nospam-914294 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914294 --log_dir /tmp/nospam-914294 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914294 --log_dir /tmp/nospam-914294 pause
--- PASS: TestErrorSpam/pause (1.84s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.87s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914294 --log_dir /tmp/nospam-914294 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914294 --log_dir /tmp/nospam-914294 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914294 --log_dir /tmp/nospam-914294 unpause
--- PASS: TestErrorSpam/unpause (1.87s)

                                                
                                    
x
+
TestErrorSpam/stop (1.46s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914294 --log_dir /tmp/nospam-914294 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-914294 --log_dir /tmp/nospam-914294 stop: (1.256953186s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914294 --log_dir /tmp/nospam-914294 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914294 --log_dir /tmp/nospam-914294 stop
--- PASS: TestErrorSpam/stop (1.46s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19423-1006087/.minikube/files/etc/test/nested/copy/1011462/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (54.34s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-915934 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-915934 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (54.337198458s)
--- PASS: TestFunctional/serial/StartWithProxy (54.34s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (25.88s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-915934 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-915934 --alsologtostderr -v=8: (25.874377863s)
functional_test.go:663: soft start took 25.87558002s for "functional-915934" cluster.
--- PASS: TestFunctional/serial/SoftStart (25.88s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-915934 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.13s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (4.98s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-915934 cache add registry.k8s.io/pause:3.1: (1.604967925s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-915934 cache add registry.k8s.io/pause:3.3: (1.563860742s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-915934 cache add registry.k8s.io/pause:latest: (1.81537958s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.98s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.44s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-915934 /tmp/TestFunctionalserialCacheCmdcacheadd_local2979995960/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 cache add minikube-local-cache-test:functional-915934
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 cache delete minikube-local-cache-test:functional-915934
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-915934
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.44s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (2.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-915934 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (293.499153ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-915934 cache reload: (1.2283467s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.14s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 kubectl -- --context functional-915934 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-915934 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (37.89s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-915934 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-915934 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.892132189s)
functional_test.go:761: restart took 37.892244647s for "functional-915934" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.89s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-915934 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.78s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-915934 logs: (1.781826893s)
--- PASS: TestFunctional/serial/LogsCmd (1.78s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (2s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 logs --file /tmp/TestFunctionalserialLogsFileCmd3996584409/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-915934 logs --file /tmp/TestFunctionalserialLogsFileCmd3996584409/001/logs.txt: (1.99955227s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.00s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.9s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-915934 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-915934
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-915934: exit status 115 (562.279232ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32376 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-915934 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-915934 delete -f testdata/invalidsvc.yaml: (1.072024315s)
--- PASS: TestFunctional/serial/InvalidService (4.90s)
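Note: this test relies on `minikube service` returning a distinct exit code when the service has no running backing pod; the run above exited 115 (SVC_UNREACHABLE). A hedged sketch of checking that behaviour manually, assuming the same testdata manifest:

	kubectl --context functional-915934 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-arm64 service invalid-svc -p functional-915934
	echo "exit=$?"        # 115 in the run above: no running pod for service invalid-svc
	kubectl --context functional-915934 delete -f testdata/invalidsvc.yaml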

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 config get cpus
E0819 20:34:51.920618 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:34:51.927475 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:34:51.939471 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-915934 config get cpus: exit status 14 (74.33161ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 config set cpus 2
E0819 20:34:51.961821 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:34:52.003708 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 config get cpus
E0819 20:34:52.085769 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 config get cpus
E0819 20:34:52.247572 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-915934 config get cpus: exit status 14 (86.946599ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
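Note: the sequence above shows the config subcommand contract this test checks: `config get` on an unset key exits 14 with "specified key could not be found in config", while set/unset succeed silently. A minimal sketch:

	out/minikube-linux-arm64 -p functional-915934 config set cpus 2
	out/minikube-linux-arm64 -p functional-915934 config get cpus      # prints 2
	out/minikube-linux-arm64 -p functional-915934 config unset cpus
	out/minikube-linux-arm64 -p functional-915934 config get cpus      # exits 14: key not found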

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (9.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-915934 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-915934 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1038796: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.81s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-915934 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-915934 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (190.967113ms)

                                                
                                                
-- stdout --
	* [functional-915934] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-1006087/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1006087/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 20:35:26.102594 1038353 out.go:345] Setting OutFile to fd 1 ...
	I0819 20:35:26.102756 1038353 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:35:26.102768 1038353 out.go:358] Setting ErrFile to fd 2...
	I0819 20:35:26.102773 1038353 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:35:26.103032 1038353 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1006087/.minikube/bin
	I0819 20:35:26.103437 1038353 out.go:352] Setting JSON to false
	I0819 20:35:26.104392 1038353 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":15467,"bootTime":1724084259,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0819 20:35:26.104462 1038353 start.go:139] virtualization:  
	I0819 20:35:26.107486 1038353 out.go:177] * [functional-915934] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 20:35:26.110891 1038353 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 20:35:26.111001 1038353 notify.go:220] Checking for updates...
	I0819 20:35:26.116339 1038353 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 20:35:26.119092 1038353 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-1006087/kubeconfig
	I0819 20:35:26.121738 1038353 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1006087/.minikube
	I0819 20:35:26.124351 1038353 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 20:35:26.126943 1038353 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 20:35:26.130085 1038353 config.go:182] Loaded profile config "functional-915934": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 20:35:26.130629 1038353 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 20:35:26.158468 1038353 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 20:35:26.158590 1038353 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 20:35:26.224220 1038353 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-19 20:35:26.213769577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 20:35:26.224362 1038353 docker.go:307] overlay module found
	I0819 20:35:26.227280 1038353 out.go:177] * Using the docker driver based on existing profile
	I0819 20:35:26.229957 1038353 start.go:297] selected driver: docker
	I0819 20:35:26.229983 1038353 start.go:901] validating driver "docker" against &{Name:functional-915934 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-915934 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 20:35:26.230119 1038353 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 20:35:26.233439 1038353 out.go:201] 
	W0819 20:35:26.236252 1038353 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0819 20:35:26.238818 1038353 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-915934 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.47s)
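Note: the dry-run path still validates resource requests: 250MB is below the 1800MB usable minimum, so the first start exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY) before touching the cluster, while the second dry-run without --memory passes. Sketch using the same flags as the log:

	out/minikube-linux-arm64 start -p functional-915934 --dry-run --memory 250MB \
	  --alsologtostderr --driver=docker --container-runtime=crio
	echo "exit=$?"   # 23: requested 250MiB is less than the usable minimum of 1800MB
	out/minikube-linux-arm64 start -p functional-915934 --dry-run \
	  --alsologtostderr -v=1 --driver=docker --container-runtime=crio   # passes validation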

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-915934 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-915934 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (205.608638ms)

                                                
                                                
-- stdout --
	* [functional-915934] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-1006087/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1006087/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 20:35:25.898874 1038308 out.go:345] Setting OutFile to fd 1 ...
	I0819 20:35:25.899044 1038308 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:35:25.899065 1038308 out.go:358] Setting ErrFile to fd 2...
	I0819 20:35:25.899083 1038308 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:35:25.899458 1038308 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1006087/.minikube/bin
	I0819 20:35:25.899973 1038308 out.go:352] Setting JSON to false
	I0819 20:35:25.901003 1038308 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":15467,"bootTime":1724084259,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0819 20:35:25.901104 1038308 start.go:139] virtualization:  
	I0819 20:35:25.905313 1038308 out.go:177] * [functional-915934] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0819 20:35:25.909014 1038308 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 20:35:25.909236 1038308 notify.go:220] Checking for updates...
	I0819 20:35:25.916554 1038308 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 20:35:25.919323 1038308 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-1006087/kubeconfig
	I0819 20:35:25.922075 1038308 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1006087/.minikube
	I0819 20:35:25.925110 1038308 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 20:35:25.928501 1038308 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 20:35:25.932819 1038308 config.go:182] Loaded profile config "functional-915934": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 20:35:25.933342 1038308 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 20:35:25.966508 1038308 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 20:35:25.966620 1038308 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 20:35:26.032834 1038308 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-19 20:35:26.022611688 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 20:35:26.032957 1038308 docker.go:307] overlay module found
	I0819 20:35:26.035842 1038308 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0819 20:35:26.038718 1038308 start.go:297] selected driver: docker
	I0819 20:35:26.038746 1038308 start.go:901] validating driver "docker" against &{Name:functional-915934 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-915934 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 20:35:26.038859 1038308 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 20:35:26.042276 1038308 out.go:201] 
	W0819 20:35:26.045059 1038308 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0819 20:35:26.047722 1038308 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.17s)
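Note: status can be rendered three ways: the default table, a Go template over the status struct fields, or JSON. Sketch mirroring the commands above (the template keys {{.Host}}, {{.Kubelet}}, {{.APIServer}} and {{.Kubeconfig}} come from the logged run):

	out/minikube-linux-arm64 -p functional-915934 status
	out/minikube-linux-arm64 -p functional-915934 status \
	  -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	out/minikube-linux-arm64 -p functional-915934 status -o json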

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (11.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-915934 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-915934 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-7xvf8" [88fbcf00-9b67-4ebd-b7ae-ee9f26175b37] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-7xvf8" [88fbcf00-9b67-4ebd-b7ae-ee9f26175b37] Running
E0819 20:35:12.417463 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.00428834s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32273
functional_test.go:1675: http://192.168.49.2:32273: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-65d86f57f4-7xvf8

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32273
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.64s)
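Note: the test deploys an echo server, exposes it as a NodePort service, asks minikube for the URL and fetches it. A condensed sketch of the same flow (the `kubectl wait` step is an assumption; the test polls pods by label instead):

	kubectl --context functional-915934 create deployment hello-node-connect \
	  --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-915934 expose deployment hello-node-connect --type=NodePort --port=8080
	kubectl --context functional-915934 wait --for=condition=ready pod -l app=hello-node-connect --timeout=120s
	URL=$(out/minikube-linux-arm64 -p functional-915934 service hello-node-connect --url)
	curl -s "$URL"    # echo server reports its hostname, request headers, etc.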

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (26.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [b7df0152-7277-4cf9-96eb-f079f68cfb58] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004162928s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-915934 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-915934 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-915934 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-915934 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3a43b231-36f7-4d88-bf74-8d53ce06b102] Pending
helpers_test.go:344: "sp-pod" [3a43b231-36f7-4d88-bf74-8d53ce06b102] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0819 20:35:02.175989 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [3a43b231-36f7-4d88-bf74-8d53ce06b102] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003875404s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-915934 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-915934 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-915934 delete -f testdata/storage-provisioner/pod.yaml: (1.000702746s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-915934 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5dd26e2f-5b77-4f04-b08d-47f50e27f619] Pending
helpers_test.go:344: "sp-pod" [5dd26e2f-5b77-4f04-b08d-47f50e27f619] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003857101s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-915934 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.35s)
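Note: the persistence check is: write a file through the PVC-backed pod, delete and recreate the pod, then confirm the file is still on the volume. A sketch of that loop, assuming the same testdata manifests and the pod name sp-pod seen in the log (`kubectl wait` stands in for the test's own polling):

	kubectl --context functional-915934 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-915934 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-915934 wait --for=condition=ready pod sp-pod --timeout=180s
	kubectl --context functional-915934 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-915934 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-915934 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-915934 wait --for=condition=ready pod sp-pod --timeout=180s
	kubectl --context functional-915934 exec sp-pod -- ls /tmp/mount   # foo survives the pod restart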

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.66s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh -n functional-915934 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 cp functional-915934:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4163173037/001/cp-test.txt
E0819 20:34:52.568785 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh -n functional-915934 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
E0819 20:34:53.210917 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh -n functional-915934 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.16s)
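Note: `minikube cp` copies in both directions: a host path into the node, and <profile>:<node path> back out to the host. Sketch based on the commands above (the /tmp destination on the host is illustrative):

	out/minikube-linux-arm64 -p functional-915934 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-arm64 -p functional-915934 ssh -n functional-915934 "sudo cat /home/docker/cp-test.txt"
	# copy back out of the node; the host destination path is illustrative
	out/minikube-linux-arm64 -p functional-915934 cp functional-915934:/home/docker/cp-test.txt /tmp/cp-test.txt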

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1011462/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh "sudo cat /etc/test/nested/copy/1011462/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1011462.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh "sudo cat /etc/ssl/certs/1011462.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1011462.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh "sudo cat /usr/share/ca-certificates/1011462.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/10114622.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh "sudo cat /etc/ssl/certs/10114622.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/10114622.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh "sudo cat /usr/share/ca-certificates/10114622.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.13s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-915934 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-915934 ssh "sudo systemctl is-active docker": exit status 1 (356.020558ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-915934 ssh "sudo systemctl is-active containerd": exit status 1 (333.436649ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)
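Note: with --container-runtime=crio, the docker and containerd units inside the node should be inactive; `systemctl is-active` prints "inactive" and exits 3 for them, which is why the non-zero exits above still count as a pass. Sketch (the final crio check is an assumption, not part of the logged run):

	out/minikube-linux-arm64 -p functional-915934 ssh "sudo systemctl is-active docker"      # inactive, exit 3
	out/minikube-linux-arm64 -p functional-915934 ssh "sudo systemctl is-active containerd"  # inactive, exit 3
	out/minikube-linux-arm64 -p functional-915934 ssh "sudo systemctl is-active crio"        # assumed active, exit 0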

                                                
                                    
x
+
TestFunctional/parallel/License (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-915934 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-915934 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-915934 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1036005: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-915934 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-915934 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-915934 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [8cd127e7-51dc-4841-b871-5fadfb0ec19d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0819 20:34:54.492286 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:34:57.054409 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "nginx-svc" [8cd127e7-51dc-4841-b871-5fadfb0ec19d] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.00416521s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.48s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-915934 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.106.169 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-915934 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
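Note: taken together, the tunnel sub-tests show that a background `minikube tunnel` gives LoadBalancer services a reachable ingress IP on the host, which the WaitService and AccessDirect steps then read and curl. A hedged sketch of the whole flow (running the tunnel in the background with `&` is illustrative; the suite manages it as a daemon):

	out/minikube-linux-arm64 -p functional-915934 tunnel --alsologtostderr &
	TUNNEL_PID=$!
	kubectl --context functional-915934 apply -f testdata/testsvc.yaml
	kubectl --context functional-915934 wait --for=condition=ready pod -l run=nginx-svc --timeout=240s
	IP=$(kubectl --context functional-915934 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	curl -s "http://$IP"     # 10.101.106.169 in the run above
	kill $TUNNEL_PID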

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-915934 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-915934 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-7krjt" [a758a355-a6c6-4c0f-8e32-5050caf1666d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-7krjt" [a758a355-a6c6-4c0f-8e32-5050caf1666d] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004905824s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "320.42009ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "59.437055ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "326.044648ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "65.290329ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-915934 /tmp/TestFunctionalparallelMountCmdany-port1469611095/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724099721449068092" to /tmp/TestFunctionalparallelMountCmdany-port1469611095/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724099721449068092" to /tmp/TestFunctionalparallelMountCmdany-port1469611095/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724099721449068092" to /tmp/TestFunctionalparallelMountCmdany-port1469611095/001/test-1724099721449068092
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-915934 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (350.639004ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 19 20:35 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 19 20:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 19 20:35 test-1724099721449068092
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh cat /mount-9p/test-1724099721449068092
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-915934 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ed510652-ae05-4793-a717-4152cfa336d8] Pending
helpers_test.go:344: "busybox-mount" [ed510652-ae05-4793-a717-4152cfa336d8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ed510652-ae05-4793-a717-4152cfa336d8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ed510652-ae05-4793-a717-4152cfa336d8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004577234s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-915934 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-915934 /tmp/TestFunctionalparallelMountCmdany-port1469611095/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.11s)
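Note: the mount test runs `minikube mount` as a background process, verifies the 9p mount from inside the node, lets a pod consume it, then unmounts. A condensed sketch (the host directory /tmp/host-dir and backgrounding with `&` are illustrative, not from the log):

	mkdir -p /tmp/host-dir && echo hello > /tmp/host-dir/created-by-test
	out/minikube-linux-arm64 mount -p functional-915934 /tmp/host-dir:/mount-9p --alsologtostderr -v=1 &
	MOUNT_PID=$!
	out/minikube-linux-arm64 -p functional-915934 ssh "findmnt -T /mount-9p | grep 9p"   # may need one retry right after mounting, as above
	out/minikube-linux-arm64 -p functional-915934 ssh -- ls -la /mount-9p
	out/minikube-linux-arm64 -p functional-915934 ssh "sudo umount -f /mount-9p"
	kill $MOUNT_PID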

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.58s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 service list -o json
functional_test.go:1494: Took "612.459219ms" to run "out/minikube-linux-arm64 -p functional-915934 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)
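
The two listings above differ only in output format; for reference, the equivalent calls outside the harness are:

    out/minikube-linux-arm64 -p functional-915934 service list
    out/minikube-linux-arm64 -p functional-915934 service list -o json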

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30374
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30374
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)
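
A sketch of how the discovered endpoint could be checked by hand, assuming the hello-node service is still deployed (the curl call is illustrative and not part of the test):

    URL=$(out/minikube-linux-arm64 -p functional-915934 service hello-node --url)
    curl -s "$URL"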

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.41s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-915934 /tmp/TestFunctionalparallelMountCmdspecific-port1817325211/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-915934 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (424.722802ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-915934 /tmp/TestFunctionalparallelMountCmdspecific-port1817325211/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-915934 ssh "sudo umount -f /mount-9p": exit status 1 (326.617008ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-915934 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-915934 /tmp/TestFunctionalparallelMountCmdspecific-port1817325211/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.41s)
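
For reference, the fixed-port variant above can be reproduced as follows; the host directory is a placeholder and 46464 is the port the test pins:

    out/minikube-linux-arm64 mount -p functional-915934 /tmp/example-host-dir:/mount-9p --alsologtostderr -v=1 --port 46464 &
    out/minikube-linux-arm64 -p functional-915934 ssh "findmnt -T /mount-9p | grep 9p"
    # force-unmount inside the guest; exits non-zero (status 32) when nothing is mounted, as captured above
    out/minikube-linux-arm64 -p functional-915934 ssh "sudo umount -f /mount-9p"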

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.58s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-915934 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4280572737/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-915934 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4280572737/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-915934 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4280572737/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-915934 ssh "findmnt -T" /mount1: exit status 1 (873.437147ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh "findmnt -T" /mount2
E0819 20:35:32.898785 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-915934 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-915934 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4280572737/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-915934 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4280572737/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-915934 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4280572737/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.58s)
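
The cleanup test above mounts one host directory at three guest targets and then removes everything with a single kill. A hand-run sketch (host directory is a placeholder):

    for m in /mount1 /mount2 /mount3; do
      out/minikube-linux-arm64 mount -p functional-915934 /tmp/example-host-dir:$m --alsologtostderr -v=1 &
    done
    out/minikube-linux-arm64 -p functional-915934 ssh "findmnt -T" /mount1
    out/minikube-linux-arm64 mount -p functional-915934 --kill=true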

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (1.27s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-915934 version -o=json --components: (1.274851591s)
--- PASS: TestFunctional/parallel/Version/components (1.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-915934 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-915934
localhost/kicbase/echo-server:functional-915934
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-915934 image ls --format short --alsologtostderr:
I0819 20:35:42.255962 1041149 out.go:345] Setting OutFile to fd 1 ...
I0819 20:35:42.256111 1041149 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 20:35:42.256117 1041149 out.go:358] Setting ErrFile to fd 2...
I0819 20:35:42.256124 1041149 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 20:35:42.256425 1041149 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1006087/.minikube/bin
I0819 20:35:42.257256 1041149 config.go:182] Loaded profile config "functional-915934": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 20:35:42.257404 1041149 config.go:182] Loaded profile config "functional-915934": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 20:35:42.258103 1041149 cli_runner.go:164] Run: docker container inspect functional-915934 --format={{.State.Status}}
I0819 20:35:42.295520 1041149 ssh_runner.go:195] Run: systemctl --version
I0819 20:35:42.295591 1041149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-915934
I0819 20:35:42.335938 1041149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/functional-915934/id_rsa Username:docker}
I0819 20:35:42.434260 1041149 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)
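
This subtest and the next three vary only the listing format; for comparison, the four forms exercised are:

    out/minikube-linux-arm64 -p functional-915934 image ls --format short
    out/minikube-linux-arm64 -p functional-915934 image ls --format table
    out/minikube-linux-arm64 -p functional-915934 image ls --format json
    out/minikube-linux-arm64 -p functional-915934 image ls --format yaml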

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-915934 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-proxy              | v1.31.0            | 71d55d66fd4ee | 95.9MB |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | d5e283bc63d43 | 90.3MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/coredns/coredns         | v1.11.1            | 2437cf7621777 | 58.8MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-apiserver          | v1.31.0            | cd0f0ae0ec9e0 | 92.6MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| docker.io/library/nginx                 | latest             | a9dfdba8b7190 | 197MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| localhost/kicbase/echo-server           | functional-915934  | ce2d2cda2d858 | 4.79MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 27e3830e14027 | 140MB  |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 6a23fa8fd2b78 | 90.3MB |
| registry.k8s.io/kube-controller-manager | v1.31.0            | fcb0683e6bdbd | 86.9MB |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/library/nginx                 | alpine             | 70594c812316a | 48.4MB |
| localhost/minikube-local-cache-test     | functional-915934  | cef6696a06119 | 3.33kB |
| registry.k8s.io/kube-scheduler          | v1.31.0            | fbbbd428abb4d | 67MB   |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-915934 image ls --format table --alsologtostderr:
I0819 20:35:42.648339 1041225 out.go:345] Setting OutFile to fd 1 ...
I0819 20:35:42.650754 1041225 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 20:35:42.650760 1041225 out.go:358] Setting ErrFile to fd 2...
I0819 20:35:42.650765 1041225 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 20:35:42.651015 1041225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1006087/.minikube/bin
I0819 20:35:42.651707 1041225 config.go:182] Loaded profile config "functional-915934": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 20:35:42.651843 1041225 config.go:182] Loaded profile config "functional-915934": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 20:35:42.652372 1041225 cli_runner.go:164] Run: docker container inspect functional-915934 --format={{.State.Status}}
I0819 20:35:42.671815 1041225 ssh_runner.go:195] Run: systemctl --version
I0819 20:35:42.671876 1041225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-915934
I0819 20:35:42.694630 1041225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/functional-915934/id_rsa Username:docker}
I0819 20:35:42.791615 1041225 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-915934 image ls --format json --alsologtostderr:
[{"id":"fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:ed8613b19e25d56d25e9ba0d83fd1e14f8ba070cb80e2674ba62ded55e260a9c","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"86930758"},{"id":"71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":["registry.k8s.io/kube-proxy@sha256:b7d336a1c5e9719bafe8a97dbb2c503580b5ac898f3f40329fc98f6a1f0ea971","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"95949719"},{"id":"fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":["registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808","registry.k8s.io/kube-scheduler@sha256:dd427ccac78f027990d5a009
36681095842a0d813c70ecc2d4f65f3bd3beef77"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"67007814"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},{"id":"70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753","repoDigests":["docker.io/library/nginx@sha256:ba188f579f7a2638229e326e78c957a185630e303757813ef1ad7aac1b8248b6","docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158"],"repoTags":["docker.io/library/nginx:alpine"],"size":"48397013"},{"id":"a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc","repoDigests":["docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add","docker.io/library/nginx@sha256:
bab0713884fed8a137ba5bd2d67d218c6192bd79b5a3526d3eb15567e035eb55"],"repoTags":["docker.io/library/nginx:latest"],"size":"197172049"},{"id":"cef6696a06119572a5f38909e97229ed468325470713eb503b6373a34f53f833","repoDigests":["localhost/minikube-local-cache-test@sha256:bbe4b293d85b50481c7ef1d8c02c6f951f5d01183d83ec0a3b1794b69ea39106"],"repoTags":["localhost/minikube-local-cache-test:functional-915934"],"size":"3330"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.
io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a","registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139912446"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:74a8050ec347821b7884ab635f3e7883b5c570388ed8087ffd01fd9fe1cb39c6"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"92567005"},{"id":"8cb2091f6
03e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"90295858"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests"
:["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:c26d1775b97b4ba3436f3cdc4d5c153b773ce2b3f5ad8e201f16b09e7182d63e"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"90290738"},{"id":"ce2d2cda2d858fdae
a84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-915934"],"size":"4788229"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"58812704"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-915934 image ls --format json --alsologtostderr:
I0819 20:35:42.561105 1041208 out.go:345] Setting OutFile to fd 1 ...
I0819 20:35:42.561342 1041208 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 20:35:42.561367 1041208 out.go:358] Setting ErrFile to fd 2...
I0819 20:35:42.561396 1041208 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 20:35:42.561860 1041208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1006087/.minikube/bin
I0819 20:35:42.563039 1041208 config.go:182] Loaded profile config "functional-915934": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 20:35:42.563269 1041208 config.go:182] Loaded profile config "functional-915934": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 20:35:42.564101 1041208 cli_runner.go:164] Run: docker container inspect functional-915934 --format={{.State.Status}}
I0819 20:35:42.593096 1041208 ssh_runner.go:195] Run: systemctl --version
I0819 20:35:42.593174 1041208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-915934
I0819 20:35:42.626886 1041208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/functional-915934/id_rsa Username:docker}
I0819 20:35:42.722273 1041208 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-915934 image ls --format yaml --alsologtostderr:
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-915934
size: "4788229"
- id: cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:74a8050ec347821b7884ab635f3e7883b5c570388ed8087ffd01fd9fe1cb39c6
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "92567005"
- id: 6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "90295858"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: cef6696a06119572a5f38909e97229ed468325470713eb503b6373a34f53f833
repoDigests:
- localhost/minikube-local-cache-test@sha256:bbe4b293d85b50481c7ef1d8c02c6f951f5d01183d83ec0a3b1794b69ea39106
repoTags:
- localhost/minikube-local-cache-test:functional-915934
size: "3330"
- id: 71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b7d336a1c5e9719bafe8a97dbb2c503580b5ac898f3f40329fc98f6a1f0ea971
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "95949719"
- id: fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
- registry.k8s.io/kube-scheduler@sha256:dd427ccac78f027990d5a00936681095842a0d813c70ecc2d4f65f3bd3beef77
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "67007814"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:c26d1775b97b4ba3436f3cdc4d5c153b773ce2b3f5ad8e201f16b09e7182d63e
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "90290738"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753
repoDigests:
- docker.io/library/nginx@sha256:ba188f579f7a2638229e326e78c957a185630e303757813ef1ad7aac1b8248b6
- docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158
repoTags:
- docker.io/library/nginx:alpine
size: "48397013"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
- registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139912446"
- id: a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc
repoDigests:
- docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
- docker.io/library/nginx@sha256:bab0713884fed8a137ba5bd2d67d218c6192bd79b5a3526d3eb15567e035eb55
repoTags:
- docker.io/library/nginx:latest
size: "197172049"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "58812704"
- id: fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:ed8613b19e25d56d25e9ba0d83fd1e14f8ba070cb80e2674ba62ded55e260a9c
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "86930758"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-915934 image ls --format yaml --alsologtostderr:
I0819 20:35:42.327073 1041150 out.go:345] Setting OutFile to fd 1 ...
I0819 20:35:42.327284 1041150 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 20:35:42.327312 1041150 out.go:358] Setting ErrFile to fd 2...
I0819 20:35:42.327346 1041150 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 20:35:42.327848 1041150 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1006087/.minikube/bin
I0819 20:35:42.329900 1041150 config.go:182] Loaded profile config "functional-915934": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 20:35:42.330358 1041150 config.go:182] Loaded profile config "functional-915934": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 20:35:42.331363 1041150 cli_runner.go:164] Run: docker container inspect functional-915934 --format={{.State.Status}}
I0819 20:35:42.364742 1041150 ssh_runner.go:195] Run: systemctl --version
I0819 20:35:42.364808 1041150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-915934
I0819 20:35:42.391205 1041150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/functional-915934/id_rsa Username:docker}
I0819 20:35:42.483420 1041150 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-915934 ssh pgrep buildkitd: exit status 1 (289.080976ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 image build -t localhost/my-image:functional-915934 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-915934 image build -t localhost/my-image:functional-915934 testdata/build --alsologtostderr: (2.326571082s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-arm64 -p functional-915934 image build -t localhost/my-image:functional-915934 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 78c5ebc8163
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-915934
--> 8c308279014
Successfully tagged localhost/my-image:functional-915934
8c30827901447cf82d60ab0f70f4966749347b2d58e4b8b9e9947134bdd7bf17
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-915934 image build -t localhost/my-image:functional-915934 testdata/build --alsologtostderr:
I0819 20:35:43.116576 1041338 out.go:345] Setting OutFile to fd 1 ...
I0819 20:35:43.117089 1041338 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 20:35:43.117105 1041338 out.go:358] Setting ErrFile to fd 2...
I0819 20:35:43.117111 1041338 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 20:35:43.117370 1041338 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1006087/.minikube/bin
I0819 20:35:43.118085 1041338 config.go:182] Loaded profile config "functional-915934": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 20:35:43.118689 1041338 config.go:182] Loaded profile config "functional-915934": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 20:35:43.119217 1041338 cli_runner.go:164] Run: docker container inspect functional-915934 --format={{.State.Status}}
I0819 20:35:43.135843 1041338 ssh_runner.go:195] Run: systemctl --version
I0819 20:35:43.135900 1041338 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-915934
I0819 20:35:43.153338 1041338 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/functional-915934/id_rsa Username:docker}
I0819 20:35:43.242129 1041338 build_images.go:161] Building image from path: /tmp/build.1459664424.tar
I0819 20:35:43.242197 1041338 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0819 20:35:43.251294 1041338 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1459664424.tar
I0819 20:35:43.255025 1041338 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1459664424.tar: stat -c "%s %y" /var/lib/minikube/build/build.1459664424.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1459664424.tar': No such file or directory
I0819 20:35:43.255056 1041338 ssh_runner.go:362] scp /tmp/build.1459664424.tar --> /var/lib/minikube/build/build.1459664424.tar (3072 bytes)
I0819 20:35:43.286028 1041338 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1459664424
I0819 20:35:43.298199 1041338 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1459664424 -xf /var/lib/minikube/build/build.1459664424.tar
I0819 20:35:43.307766 1041338 crio.go:315] Building image: /var/lib/minikube/build/build.1459664424
I0819 20:35:43.307841 1041338 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-915934 /var/lib/minikube/build/build.1459664424 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0819 20:35:45.363069 1041338 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-915934 /var/lib/minikube/build/build.1459664424 --cgroup-manager=cgroupfs: (2.055203755s)
I0819 20:35:45.363152 1041338 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1459664424
I0819 20:35:45.374497 1041338 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1459664424.tar
I0819 20:35:45.386706 1041338 build_images.go:217] Built localhost/my-image:functional-915934 from /tmp/build.1459664424.tar
I0819 20:35:45.386746 1041338 build_images.go:133] succeeded building to: functional-915934
I0819 20:35:45.386753 1041338 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.86s)
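
For reference, the in-cluster build above in plain command form; testdata/build is the context directory shipped with the test (its build file produces the FROM/RUN/ADD steps shown above), and the tag is the one the test uses:

    out/minikube-linux-arm64 -p functional-915934 image build -t localhost/my-image:functional-915934 testdata/build --alsologtostderr
    out/minikube-linux-arm64 -p functional-915934 image ls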

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.71s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-915934
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.71s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 image load --daemon kicbase/echo-server:functional-915934 --alsologtostderr
2024/08/19 20:35:36 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-915934 image load --daemon kicbase/echo-server:functional-915934 --alsologtostderr: (1.282082819s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.60s)
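
The pull/tag/load flow exercised by Setup and ImageLoadDaemon, sketched with the same image the tests use:

    docker pull kicbase/echo-server:1.0
    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-915934
    out/minikube-linux-arm64 -p functional-915934 image load --daemon kicbase/echo-server:functional-915934
    out/minikube-linux-arm64 -p functional-915934 image ls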

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 image load --daemon kicbase/echo-server:functional-915934 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-915934
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 image load --daemon kicbase/echo-server:functional-915934 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 image save kicbase/echo-server:functional-915934 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 image rm kicbase/echo-server:functional-915934 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.72s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-915934
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-915934 image save --daemon kicbase/echo-server:functional-915934 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-915934
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.60s)
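
Taken together, the last few image subtests form a save/remove/reload round trip; a compact sketch (the tar path is a placeholder):

    out/minikube-linux-arm64 -p functional-915934 image save kicbase/echo-server:functional-915934 /tmp/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-915934 image rm kicbase/echo-server:functional-915934
    out/minikube-linux-arm64 -p functional-915934 image load /tmp/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-915934 image save --daemon kicbase/echo-server:functional-915934
    docker image inspect localhost/kicbase/echo-server:functional-915934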

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-915934
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-915934
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-915934
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (174.84s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-876838 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0819 20:36:13.861235 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:37:35.783509 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-876838 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m54.014694483s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (174.84s)
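
The start invocation above, repeated in copy-pasteable form for anyone reproducing the HA setup locally:

    out/minikube-linux-arm64 start -p ha-876838 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p ha-876838 status -v=7 --alsologtostderr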

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.91s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-876838 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-876838 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-876838 -- rollout status deployment/busybox: (3.878571542s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-876838 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-876838 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-876838 -- exec busybox-7dff88458-6klbz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-876838 -- exec busybox-7dff88458-996zv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-876838 -- exec busybox-7dff88458-vwtq8 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-876838 -- exec busybox-7dff88458-6klbz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-876838 -- exec busybox-7dff88458-996zv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-876838 -- exec busybox-7dff88458-vwtq8 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-876838 -- exec busybox-7dff88458-6klbz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-876838 -- exec busybox-7dff88458-996zv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-876838 -- exec busybox-7dff88458-vwtq8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.91s)
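
A sketch of the deployment and DNS checks above; the pod name is a placeholder (the test resolves real pod names from the busybox deployment):

    out/minikube-linux-arm64 kubectl -p ha-876838 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    out/minikube-linux-arm64 kubectl -p ha-876838 -- rollout status deployment/busybox
    out/minikube-linux-arm64 kubectl -p ha-876838 -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local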

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.64s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-876838 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-876838 -- exec busybox-7dff88458-6klbz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-876838 -- exec busybox-7dff88458-6klbz -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-876838 -- exec busybox-7dff88458-996zv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-876838 -- exec busybox-7dff88458-996zv -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-876838 -- exec busybox-7dff88458-vwtq8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-876838 -- exec busybox-7dff88458-vwtq8 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.64s)
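
The host-reachability check above in plain command form; the pod name is a placeholder, and 192.168.49.1 is the host-side address the test pings:

    out/minikube-linux-arm64 kubectl -p ha-876838 -- exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-linux-arm64 kubectl -p ha-876838 -- exec <busybox-pod> -- sh -c "ping -c 1 192.168.49.1"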

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (35.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-876838 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-876838 -v=7 --alsologtostderr: (34.30018984s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-876838 status -v=7 --alsologtostderr: (1.004853577s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (35.31s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-876838 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.75s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (19.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 cp testdata/cp-test.txt ha-876838:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 cp ha-876838:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1216690093/001/cp-test_ha-876838.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 cp ha-876838:/home/docker/cp-test.txt ha-876838-m02:/home/docker/cp-test_ha-876838_ha-876838-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838-m02 "sudo cat /home/docker/cp-test_ha-876838_ha-876838-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 cp ha-876838:/home/docker/cp-test.txt ha-876838-m03:/home/docker/cp-test_ha-876838_ha-876838-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838-m03 "sudo cat /home/docker/cp-test_ha-876838_ha-876838-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 cp ha-876838:/home/docker/cp-test.txt ha-876838-m04:/home/docker/cp-test_ha-876838_ha-876838-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838-m04 "sudo cat /home/docker/cp-test_ha-876838_ha-876838-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 cp testdata/cp-test.txt ha-876838-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 cp ha-876838-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1216690093/001/cp-test_ha-876838-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 cp ha-876838-m02:/home/docker/cp-test.txt ha-876838:/home/docker/cp-test_ha-876838-m02_ha-876838.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838 "sudo cat /home/docker/cp-test_ha-876838-m02_ha-876838.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 cp ha-876838-m02:/home/docker/cp-test.txt ha-876838-m03:/home/docker/cp-test_ha-876838-m02_ha-876838-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838-m03 "sudo cat /home/docker/cp-test_ha-876838-m02_ha-876838-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 cp ha-876838-m02:/home/docker/cp-test.txt ha-876838-m04:/home/docker/cp-test_ha-876838-m02_ha-876838-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838-m04 "sudo cat /home/docker/cp-test_ha-876838-m02_ha-876838-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 cp testdata/cp-test.txt ha-876838-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 cp ha-876838-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1216690093/001/cp-test_ha-876838-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 cp ha-876838-m03:/home/docker/cp-test.txt ha-876838:/home/docker/cp-test_ha-876838-m03_ha-876838.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838 "sudo cat /home/docker/cp-test_ha-876838-m03_ha-876838.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 cp ha-876838-m03:/home/docker/cp-test.txt ha-876838-m02:/home/docker/cp-test_ha-876838-m03_ha-876838-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838-m02 "sudo cat /home/docker/cp-test_ha-876838-m03_ha-876838-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 cp ha-876838-m03:/home/docker/cp-test.txt ha-876838-m04:/home/docker/cp-test_ha-876838-m03_ha-876838-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838-m04 "sudo cat /home/docker/cp-test_ha-876838-m03_ha-876838-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 cp testdata/cp-test.txt ha-876838-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 cp ha-876838-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1216690093/001/cp-test_ha-876838-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 cp ha-876838-m04:/home/docker/cp-test.txt ha-876838:/home/docker/cp-test_ha-876838-m04_ha-876838.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838 "sudo cat /home/docker/cp-test_ha-876838-m04_ha-876838.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 cp ha-876838-m04:/home/docker/cp-test.txt ha-876838-m02:/home/docker/cp-test_ha-876838-m04_ha-876838-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838-m02 "sudo cat /home/docker/cp-test_ha-876838-m04_ha-876838-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 cp ha-876838-m04:/home/docker/cp-test.txt ha-876838-m03:/home/docker/cp-test_ha-876838-m04_ha-876838-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838-m03 "sudo cat /home/docker/cp-test_ha-876838-m04_ha-876838-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.23s)
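Every step in the copy matrix above is the same two-command round trip: minikube cp places a file on a node (or pulls one off it), and minikube ssh -n reads it back to confirm the contents survived. A condensed sketch using the profile and node names from this run (the destination filename copied.txt is a placeholder):

  # Host -> node, then verify on the node.
  out/minikube-linux-arm64 -p ha-876838 cp testdata/cp-test.txt ha-876838-m02:/home/docker/cp-test.txt
  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838-m02 "sudo cat /home/docker/cp-test.txt"
  # Node -> node, then verify on the destination node.
  out/minikube-linux-arm64 -p ha-876838 cp ha-876838-m02:/home/docker/cp-test.txt ha-876838-m03:/home/docker/copied.txt
  out/minikube-linux-arm64 -p ha-876838 ssh -n ha-876838-m03 "sudo cat /home/docker/copied.txt"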

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 node stop m02 -v=7 --alsologtostderr
E0819 20:39:51.919968 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:39:53.553166 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/functional-915934/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:39:53.559624 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/functional-915934/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:39:53.571100 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/functional-915934/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:39:53.592617 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/functional-915934/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:39:53.634022 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/functional-915934/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:39:53.715449 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/functional-915934/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:39:53.877029 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/functional-915934/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:39:54.198674 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/functional-915934/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:39:54.840654 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/functional-915934/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:39:56.121934 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/functional-915934/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:39:58.683285 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/functional-915934/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-876838 node stop m02 -v=7 --alsologtostderr: (11.969316032s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-876838 status -v=7 --alsologtostderr: exit status 7 (971.113049ms)

                                                
                                                
-- stdout --
	ha-876838
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-876838-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-876838-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-876838-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 20:39:59.355013 1057062 out.go:345] Setting OutFile to fd 1 ...
	I0819 20:39:59.355212 1057062 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:39:59.355237 1057062 out.go:358] Setting ErrFile to fd 2...
	I0819 20:39:59.355258 1057062 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:39:59.355568 1057062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1006087/.minikube/bin
	I0819 20:39:59.355805 1057062 out.go:352] Setting JSON to false
	I0819 20:39:59.355871 1057062 mustload.go:65] Loading cluster: ha-876838
	I0819 20:39:59.355956 1057062 notify.go:220] Checking for updates...
	I0819 20:39:59.356366 1057062 config.go:182] Loaded profile config "ha-876838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 20:39:59.356402 1057062 status.go:255] checking status of ha-876838 ...
	I0819 20:39:59.356962 1057062 cli_runner.go:164] Run: docker container inspect ha-876838 --format={{.State.Status}}
	I0819 20:39:59.378594 1057062 status.go:330] ha-876838 host status = "Running" (err=<nil>)
	I0819 20:39:59.378619 1057062 host.go:66] Checking if "ha-876838" exists ...
	I0819 20:39:59.378934 1057062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-876838
	I0819 20:39:59.403093 1057062 host.go:66] Checking if "ha-876838" exists ...
	I0819 20:39:59.403402 1057062 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 20:39:59.403463 1057062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876838
	I0819 20:39:59.422001 1057062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33913 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/ha-876838/id_rsa Username:docker}
	I0819 20:39:59.515293 1057062 ssh_runner.go:195] Run: systemctl --version
	I0819 20:39:59.519904 1057062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 20:39:59.532183 1057062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 20:39:59.594768 1057062 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:71 SystemTime:2024-08-19 20:39:59.584812428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 20:39:59.595339 1057062 kubeconfig.go:125] found "ha-876838" server: "https://192.168.49.254:8443"
	I0819 20:39:59.595373 1057062 api_server.go:166] Checking apiserver status ...
	I0819 20:39:59.595421 1057062 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:39:59.606635 1057062 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1383/cgroup
	I0819 20:39:59.616774 1057062 api_server.go:182] apiserver freezer: "12:freezer:/docker/720c5e391f37748cbcb4912e8ceed34f294ecd901d7b6f0a82f4b0d682eb07d6/crio/crio-12898d265abcdda72953450281a73053863732d4caa2c62d2e5d545bf535d40d"
	I0819 20:39:59.616841 1057062 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/720c5e391f37748cbcb4912e8ceed34f294ecd901d7b6f0a82f4b0d682eb07d6/crio/crio-12898d265abcdda72953450281a73053863732d4caa2c62d2e5d545bf535d40d/freezer.state
	I0819 20:39:59.625773 1057062 api_server.go:204] freezer state: "THAWED"
	I0819 20:39:59.625802 1057062 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0819 20:39:59.633715 1057062 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0819 20:39:59.633742 1057062 status.go:422] ha-876838 apiserver status = Running (err=<nil>)
	I0819 20:39:59.633753 1057062 status.go:257] ha-876838 status: &{Name:ha-876838 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 20:39:59.633769 1057062 status.go:255] checking status of ha-876838-m02 ...
	I0819 20:39:59.634102 1057062 cli_runner.go:164] Run: docker container inspect ha-876838-m02 --format={{.State.Status}}
	I0819 20:39:59.652258 1057062 status.go:330] ha-876838-m02 host status = "Stopped" (err=<nil>)
	I0819 20:39:59.652279 1057062 status.go:343] host is not running, skipping remaining checks
	I0819 20:39:59.652286 1057062 status.go:257] ha-876838-m02 status: &{Name:ha-876838-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 20:39:59.652339 1057062 status.go:255] checking status of ha-876838-m03 ...
	I0819 20:39:59.652714 1057062 cli_runner.go:164] Run: docker container inspect ha-876838-m03 --format={{.State.Status}}
	I0819 20:39:59.673945 1057062 status.go:330] ha-876838-m03 host status = "Running" (err=<nil>)
	I0819 20:39:59.673975 1057062 host.go:66] Checking if "ha-876838-m03" exists ...
	I0819 20:39:59.674313 1057062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-876838-m03
	I0819 20:39:59.693131 1057062 host.go:66] Checking if "ha-876838-m03" exists ...
	I0819 20:39:59.693528 1057062 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 20:39:59.693585 1057062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876838-m03
	I0819 20:39:59.712914 1057062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33923 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/ha-876838-m03/id_rsa Username:docker}
	I0819 20:39:59.807710 1057062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 20:39:59.820557 1057062 kubeconfig.go:125] found "ha-876838" server: "https://192.168.49.254:8443"
	I0819 20:39:59.820598 1057062 api_server.go:166] Checking apiserver status ...
	I0819 20:39:59.820651 1057062 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:39:59.832561 1057062 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1314/cgroup
	I0819 20:39:59.842438 1057062 api_server.go:182] apiserver freezer: "12:freezer:/docker/d818c2c502aa63a6fddbe212bf16878e9864bc6d0f0664f3ff2c01b2b13996f0/crio/crio-8533fd1243ee960c33c015315925866815a826f7f3450e17c02d1e542771a5b9"
	I0819 20:39:59.842561 1057062 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d818c2c502aa63a6fddbe212bf16878e9864bc6d0f0664f3ff2c01b2b13996f0/crio/crio-8533fd1243ee960c33c015315925866815a826f7f3450e17c02d1e542771a5b9/freezer.state
	I0819 20:39:59.852620 1057062 api_server.go:204] freezer state: "THAWED"
	I0819 20:39:59.852649 1057062 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0819 20:39:59.860821 1057062 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0819 20:39:59.860850 1057062 status.go:422] ha-876838-m03 apiserver status = Running (err=<nil>)
	I0819 20:39:59.860861 1057062 status.go:257] ha-876838-m03 status: &{Name:ha-876838-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 20:39:59.860878 1057062 status.go:255] checking status of ha-876838-m04 ...
	I0819 20:39:59.861219 1057062 cli_runner.go:164] Run: docker container inspect ha-876838-m04 --format={{.State.Status}}
	I0819 20:39:59.878965 1057062 status.go:330] ha-876838-m04 host status = "Running" (err=<nil>)
	I0819 20:39:59.878993 1057062 host.go:66] Checking if "ha-876838-m04" exists ...
	I0819 20:39:59.879319 1057062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-876838-m04
	I0819 20:39:59.896786 1057062 host.go:66] Checking if "ha-876838-m04" exists ...
	I0819 20:39:59.897140 1057062 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 20:39:59.897283 1057062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-876838-m04
	I0819 20:39:59.914795 1057062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/ha-876838-m04/id_rsa Username:docker}
	I0819 20:40:00.065477 1057062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 20:40:00.207804 1057062 status.go:257] ha-876838-m04 status: &{Name:ha-876838-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.94s)
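The stderr above also documents how minikube status decides an apiserver is healthy: it finds the kube-apiserver process, checks that the process's freezer cgroup is THAWED (i.e. the container is not paused), and then queries /healthz through the HA virtual IP 192.168.49.254:8443. A sketch of the same checks run by hand against this profile; <pid> and <cgroup-path> are placeholders for the values pgrep and /proc report on your node, and curl is assumed to be present in the node image:

  # Find the apiserver process and its freezer cgroup on the primary control-plane node.
  out/minikube-linux-arm64 -p ha-876838 ssh "sudo pgrep -xnf kube-apiserver.*minikube.*"
  out/minikube-linux-arm64 -p ha-876838 ssh "sudo egrep '^[0-9]+:freezer:' /proc/<pid>/cgroup"
  # THAWED means the container is running, not paused.
  out/minikube-linux-arm64 -p ha-876838 ssh "sudo cat /sys/fs/cgroup/freezer/<cgroup-path>/freezer.state"
  # Health check against the HA virtual IP, run from inside the node so the VIP is reachable.
  out/minikube-linux-arm64 -p ha-876838 ssh "curl -sk https://192.168.49.254:8443/healthz"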

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.91s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (33.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 node start m02 -v=7 --alsologtostderr
E0819 20:40:03.805046 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/functional-915934/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:40:14.047001 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/functional-915934/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:40:19.625860 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-876838 node start m02 -v=7 --alsologtostderr: (32.338289429s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 status -v=7 --alsologtostderr
E0819 20:40:34.528986 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/functional-915934/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-876838 status -v=7 --alsologtostderr: (1.13301456s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (33.58s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (9.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (9.287087444s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (9.29s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (223.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-876838 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-876838 -v=7 --alsologtostderr
E0819 20:41:15.491960 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/functional-915934/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-876838 -v=7 --alsologtostderr: (36.864899956s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-876838 --wait=true -v=7 --alsologtostderr
E0819 20:42:37.414072 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/functional-915934/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-876838 --wait=true -v=7 --alsologtostderr: (3m6.109626365s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-876838
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (223.13s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (11.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-876838 node delete m03 -v=7 --alsologtostderr: (10.871863703s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.80s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.52s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (35.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 stop -v=7 --alsologtostderr
E0819 20:44:51.920238 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:44:53.552762 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/functional-915934/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-876838 stop -v=7 --alsologtostderr: (35.659396077s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-876838 status -v=7 --alsologtostderr: exit status 7 (112.370134ms)

                                                
                                                
-- stdout --
	ha-876838
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-876838-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-876838-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 20:45:15.201662 1071683 out.go:345] Setting OutFile to fd 1 ...
	I0819 20:45:15.201868 1071683 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:45:15.201899 1071683 out.go:358] Setting ErrFile to fd 2...
	I0819 20:45:15.201924 1071683 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:45:15.202263 1071683 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1006087/.minikube/bin
	I0819 20:45:15.202520 1071683 out.go:352] Setting JSON to false
	I0819 20:45:15.202604 1071683 mustload.go:65] Loading cluster: ha-876838
	I0819 20:45:15.202688 1071683 notify.go:220] Checking for updates...
	I0819 20:45:15.203165 1071683 config.go:182] Loaded profile config "ha-876838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 20:45:15.203222 1071683 status.go:255] checking status of ha-876838 ...
	I0819 20:45:15.203783 1071683 cli_runner.go:164] Run: docker container inspect ha-876838 --format={{.State.Status}}
	I0819 20:45:15.224179 1071683 status.go:330] ha-876838 host status = "Stopped" (err=<nil>)
	I0819 20:45:15.224201 1071683 status.go:343] host is not running, skipping remaining checks
	I0819 20:45:15.224209 1071683 status.go:257] ha-876838 status: &{Name:ha-876838 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 20:45:15.224246 1071683 status.go:255] checking status of ha-876838-m02 ...
	I0819 20:45:15.224614 1071683 cli_runner.go:164] Run: docker container inspect ha-876838-m02 --format={{.State.Status}}
	I0819 20:45:15.245826 1071683 status.go:330] ha-876838-m02 host status = "Stopped" (err=<nil>)
	I0819 20:45:15.245852 1071683 status.go:343] host is not running, skipping remaining checks
	I0819 20:45:15.245859 1071683 status.go:257] ha-876838-m02 status: &{Name:ha-876838-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 20:45:15.245878 1071683 status.go:255] checking status of ha-876838-m04 ...
	I0819 20:45:15.246207 1071683 cli_runner.go:164] Run: docker container inspect ha-876838-m04 --format={{.State.Status}}
	I0819 20:45:15.263308 1071683 status.go:330] ha-876838-m04 host status = "Stopped" (err=<nil>)
	I0819 20:45:15.263328 1071683 status.go:343] host is not running, skipping remaining checks
	I0819 20:45:15.263335 1071683 status.go:257] ha-876838-m04 status: &{Name:ha-876838-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.77s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.62s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (46.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-876838 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-876838 --control-plane -v=7 --alsologtostderr: (45.366575453s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-876838 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (46.36s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.76s)

                                                
                                    
x
+
TestJSONOutput/start/Command (52.23s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-826422 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-826422 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (52.225844099s)
--- PASS: TestJSONOutput/start/Command (52.23s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.75s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-826422 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-826422 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.79s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-826422 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-826422 --output=json --user=testUser: (5.794546579s)
--- PASS: TestJSONOutput/stop/Command (5.79s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-302889 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-302889 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (96.943292ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c814266e-401d-489b-8f8b-59082655a32d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-302889] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"673cbc2a-a7b7-41b9-baa6-0dcac5ed3f94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19423"}}
	{"specversion":"1.0","id":"9cd7ed25-e38f-46a5-8a74-fff908c38127","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5906d889-c4a5-4de7-9d68-ab39791565a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19423-1006087/kubeconfig"}}
	{"specversion":"1.0","id":"c8d96e7b-39cf-48e9-93e6-d92b7261a72a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1006087/.minikube"}}
	{"specversion":"1.0","id":"5c4a1ed9-0036-4ded-ad84-a3346d843089","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"fc5a439f-2b22-47e8-8332-5ff08f3c649c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ed684c9e-dcce-4391-8f73-ed343704cf2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-302889" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-302889
--- PASS: TestErrorJSONOutput (0.25s)
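Each stdout line above is a self-contained CloudEvents-style JSON object, so the stream can be post-processed directly. A small sketch, assuming jq is installed, that pulls the error event out of a --output=json run; the command is the same one the test issues and is expected to exit non-zero:

  # Print the name, message, and exit code of any error events emitted on stdout.
  out/minikube-linux-arm64 start -p json-output-error-302889 --memory=2200 --output=json --wait=true --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message) (exit \(.data.exitcode))"'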

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (40.86s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-074820 --network=
E0819 20:49:51.920421 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:49:53.552475 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/functional-915934/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-074820 --network=: (38.743647986s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-074820" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-074820
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-074820: (2.099422505s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.86s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (34.16s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-839736 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-839736 --network=bridge: (32.102575003s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-839736" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-839736
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-839736: (2.024492957s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.16s)

                                                
                                    
x
+
TestKicExistingNetwork (33.04s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-574545 --network=existing-network
E0819 20:51:14.987424 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-574545 --network=existing-network: (30.879817122s)
helpers_test.go:175: Cleaning up "existing-network-574545" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-574545
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-574545: (1.985823876s)
--- PASS: TestKicExistingNetwork (33.04s)

                                                
                                    
x
+
TestKicCustomSubnet (36.94s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-041786 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-041786 --subnet=192.168.60.0/24: (34.471002169s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-041786 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-041786" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-041786
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-041786: (2.448007489s)
--- PASS: TestKicCustomSubnet (36.94s)

                                                
                                    
x
+
TestKicStaticIP (32.91s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-683729 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-683729 --static-ip=192.168.200.200: (30.686533653s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-683729 ip
helpers_test.go:175: Cleaning up "static-ip-683729" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-683729
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-683729: (2.081657631s)
--- PASS: TestKicStaticIP (32.91s)
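The KIC network tests above share one pattern: start a profile with a network-shaping flag, then confirm the result from the Docker side or via minikube ip. A condensed sketch with placeholder profile names (subnet-demo, staticip-demo):

  # Custom subnet: the profile's Docker network should carry the requested CIDR.
  out/minikube-linux-arm64 start -p subnet-demo --subnet=192.168.60.0/24
  docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"    # expect 192.168.60.0/24
  # Static IP: the node should come up on the requested address.
  out/minikube-linux-arm64 start -p staticip-demo --static-ip=192.168.200.200
  out/minikube-linux-arm64 -p staticip-demo ip    # expect 192.168.200.200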

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (67.7s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-205859 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-205859 --driver=docker  --container-runtime=crio: (31.354459649s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-208985 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-208985 --driver=docker  --container-runtime=crio: (30.820438289s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-205859
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-208985
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-208985" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-208985
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-208985: (1.992627005s)
helpers_test.go:175: Cleaning up "first-205859" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-205859
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-205859: (2.264904326s)
--- PASS: TestMinikubeProfile (67.70s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (9.35s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-825193 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-825193 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.348567726s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.35s)
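
The StartWithMount* subtests pass the full set of 9p mount flags and later verify the share over SSH at /minikube-host. A minimal sketch of the same sequence, assuming minikube is on PATH and using a hypothetical profile name:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        profile := "mount-demo" // hypothetical profile name

        // Start a no-Kubernetes node with the host mount enabled,
        // reusing the flag set the test passes.
        start := exec.Command("minikube", "start", "-p", profile,
            "--memory=2048", "--mount", "--mount-gid", "0", "--mount-msize", "6543",
            "--mount-port", "46464", "--mount-uid", "0", "--no-kubernetes",
            "--driver=docker", "--container-runtime=crio")
        if out, err := start.CombinedOutput(); err != nil {
            log.Fatalf("start failed: %v\n%s", err, out)
        }

        // Listing /minikube-host over SSH is the same check the Verify* subtests run.
        verify := exec.Command("minikube", "-p", profile, "ssh", "--", "ls", "/minikube-host")
        if out, err := verify.CombinedOutput(); err != nil {
            log.Fatalf("mount check failed: %v\n%s", err, out)
        } else {
            log.Printf("mount contents:\n%s", out)
        }
    }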

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-825193 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (7.38s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-838042 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-838042 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.379419094s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.38s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-838042 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.67s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-825193 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-825193 --alsologtostderr -v=5: (1.672900245s)
--- PASS: TestMountStart/serial/DeleteFirst (1.67s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-838042 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-838042
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-838042: (1.231721275s)
--- PASS: TestMountStart/serial/Stop (1.23s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.91s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-838042
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-838042: (6.909714902s)
--- PASS: TestMountStart/serial/RestartStopped (7.91s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-838042 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (82.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-728402 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0819 20:54:51.919529 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:54:53.552283 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/functional-915934/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-728402 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m21.724845264s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (82.23s)
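
A two-node cluster like the one started here can be brought up and inspected with the same flags outside the harness. A short Go sketch, assuming minikube and kubectl are on PATH; the profile name is a placeholder:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        profile := "multinode-demo" // hypothetical profile name

        // Bring up a two-node cluster with the same knobs the test uses.
        if out, err := exec.Command("minikube", "start", "-p", profile,
            "--wait=true", "--memory=2200", "--nodes=2",
            "--driver=docker", "--container-runtime=crio").CombinedOutput(); err != nil {
            log.Fatalf("start failed: %v\n%s", err, out)
        }

        // `kubectl get nodes` against the new context should list both nodes
        // (the profile itself and profile-m02).
        out, err := exec.Command("kubectl", "--context", profile,
            "get", "nodes", "-o", "name").Output()
        if err != nil {
            log.Fatalf("get nodes failed: %v", err)
        }
        nodes := strings.Fields(string(out))
        fmt.Printf("cluster has %d nodes: %v\n", len(nodes), nodes)
    }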

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-728402 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-728402 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-728402 -- rollout status deployment/busybox: (3.029279564s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-728402 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-728402 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-728402 -- exec busybox-7dff88458-x59j2 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-728402 -- exec busybox-7dff88458-xscln -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-728402 -- exec busybox-7dff88458-x59j2 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-728402 -- exec busybox-7dff88458-xscln -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-728402 -- exec busybox-7dff88458-x59j2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-728402 -- exec busybox-7dff88458-xscln -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.95s)
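
The deploy-and-DNS check above boils down to an apply, a rollout wait, and an in-pod nslookup. A compact sketch of that sequence, assuming kubectl is on PATH; the context and pod names are placeholders (the real pod names come from `kubectl get pods`):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        ctx := "multinode-demo" // hypothetical context/profile name

        // Deploy the busybox test workload and wait for the rollout; the harness
        // routes these through `minikube kubectl --`, but plain kubectl with the
        // profile's context behaves the same.
        cmds := [][]string{
            {"--context", ctx, "apply", "-f", "testdata/multinodes/multinode-pod-dns-test.yaml"},
            {"--context", ctx, "rollout", "status", "deployment/busybox"},
        }
        for _, args := range cmds {
            if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
                log.Fatalf("kubectl %v failed: %v\n%s", args, err, out)
            }
        }

        // Each pod should resolve cluster-internal names; "busybox-example" is a
        // placeholder pod name.
        check := exec.Command("kubectl", "--context", ctx, "exec", "busybox-example",
            "--", "nslookup", "kubernetes.default.svc.cluster.local")
        if out, err := check.CombinedOutput(); err != nil {
            log.Fatalf("DNS check failed: %v\n%s", err, out)
        }
    }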

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (1.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-728402 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-728402 -- exec busybox-7dff88458-x59j2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-728402 -- exec busybox-7dff88458-x59j2 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-728402 -- exec busybox-7dff88458-xscln -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-728402 -- exec busybox-7dff88458-xscln -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.02s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (31.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-728402 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-728402 -v 3 --alsologtostderr: (30.613524929s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (31.26s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-728402 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.32s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (9.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 cp testdata/cp-test.txt multinode-728402:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 ssh -n multinode-728402 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 cp multinode-728402:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1318291952/001/cp-test_multinode-728402.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 ssh -n multinode-728402 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 cp multinode-728402:/home/docker/cp-test.txt multinode-728402-m02:/home/docker/cp-test_multinode-728402_multinode-728402-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 ssh -n multinode-728402 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 ssh -n multinode-728402-m02 "sudo cat /home/docker/cp-test_multinode-728402_multinode-728402-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 cp multinode-728402:/home/docker/cp-test.txt multinode-728402-m03:/home/docker/cp-test_multinode-728402_multinode-728402-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 ssh -n multinode-728402 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 ssh -n multinode-728402-m03 "sudo cat /home/docker/cp-test_multinode-728402_multinode-728402-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 cp testdata/cp-test.txt multinode-728402-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 ssh -n multinode-728402-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 cp multinode-728402-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1318291952/001/cp-test_multinode-728402-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 ssh -n multinode-728402-m02 "sudo cat /home/docker/cp-test.txt"
E0819 20:56:16.617737 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/functional-915934/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 cp multinode-728402-m02:/home/docker/cp-test.txt multinode-728402:/home/docker/cp-test_multinode-728402-m02_multinode-728402.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 ssh -n multinode-728402-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 ssh -n multinode-728402 "sudo cat /home/docker/cp-test_multinode-728402-m02_multinode-728402.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 cp multinode-728402-m02:/home/docker/cp-test.txt multinode-728402-m03:/home/docker/cp-test_multinode-728402-m02_multinode-728402-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 ssh -n multinode-728402-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 ssh -n multinode-728402-m03 "sudo cat /home/docker/cp-test_multinode-728402-m02_multinode-728402-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 cp testdata/cp-test.txt multinode-728402-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 ssh -n multinode-728402-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 cp multinode-728402-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1318291952/001/cp-test_multinode-728402-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 ssh -n multinode-728402-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 cp multinode-728402-m03:/home/docker/cp-test.txt multinode-728402:/home/docker/cp-test_multinode-728402-m03_multinode-728402.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 ssh -n multinode-728402-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 ssh -n multinode-728402 "sudo cat /home/docker/cp-test_multinode-728402-m03_multinode-728402.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 cp multinode-728402-m03:/home/docker/cp-test.txt multinode-728402-m02:/home/docker/cp-test_multinode-728402-m03_multinode-728402-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 ssh -n multinode-728402-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 ssh -n multinode-728402-m02 "sudo cat /home/docker/cp-test_multinode-728402-m03_multinode-728402-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.97s)
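
The copy matrix above exercises `minikube cp` in three directions: host to node, node to host, and node to node, each verified with an SSH cat. A trimmed-down sketch, assuming minikube is on PATH and reusing a hypothetical two-node profile:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        profile := "multinode-demo" // hypothetical profile name

        // Host -> node, node -> host, then node -> node, as in the matrix above.
        steps := [][]string{
            {"-p", profile, "cp", "testdata/cp-test.txt", profile + ":/home/docker/cp-test.txt"},
            {"-p", profile, "cp", profile + ":/home/docker/cp-test.txt", "/tmp/cp-test-roundtrip.txt"},
            {"-p", profile, "cp", profile + ":/home/docker/cp-test.txt",
                profile + "-m02:/home/docker/cp-test.txt"},
        }
        for _, args := range steps {
            if out, err := exec.Command("minikube", args...).CombinedOutput(); err != nil {
                log.Fatalf("cp failed (%v): %v\n%s", args, err, out)
            }
        }

        // Confirm the file landed on the second node, as the ssh checks do.
        if out, err := exec.Command("minikube", "-p", profile, "ssh", "-n", profile+"-m02",
            "sudo cat /home/docker/cp-test.txt").CombinedOutput(); err != nil {
            log.Fatalf("verify failed: %v\n%s", err, out)
        }
    }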

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-728402 node stop m03: (1.206173975s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-728402 status: exit status 7 (498.419089ms)

                                                
                                                
-- stdout --
	multinode-728402
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-728402-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-728402-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-728402 status --alsologtostderr: exit status 7 (505.729439ms)

                                                
                                                
-- stdout --
	multinode-728402
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-728402-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-728402-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 20:56:23.720633 1126330 out.go:345] Setting OutFile to fd 1 ...
	I0819 20:56:23.720773 1126330 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:56:23.720783 1126330 out.go:358] Setting ErrFile to fd 2...
	I0819 20:56:23.720789 1126330 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:56:23.721017 1126330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1006087/.minikube/bin
	I0819 20:56:23.721200 1126330 out.go:352] Setting JSON to false
	I0819 20:56:23.721244 1126330 mustload.go:65] Loading cluster: multinode-728402
	I0819 20:56:23.721351 1126330 notify.go:220] Checking for updates...
	I0819 20:56:23.721711 1126330 config.go:182] Loaded profile config "multinode-728402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 20:56:23.721723 1126330 status.go:255] checking status of multinode-728402 ...
	I0819 20:56:23.722215 1126330 cli_runner.go:164] Run: docker container inspect multinode-728402 --format={{.State.Status}}
	I0819 20:56:23.740697 1126330 status.go:330] multinode-728402 host status = "Running" (err=<nil>)
	I0819 20:56:23.740719 1126330 host.go:66] Checking if "multinode-728402" exists ...
	I0819 20:56:23.741136 1126330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-728402
	I0819 20:56:23.758048 1126330 host.go:66] Checking if "multinode-728402" exists ...
	I0819 20:56:23.758398 1126330 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 20:56:23.758451 1126330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-728402
	I0819 20:56:23.785296 1126330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34033 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/multinode-728402/id_rsa Username:docker}
	I0819 20:56:23.879091 1126330 ssh_runner.go:195] Run: systemctl --version
	I0819 20:56:23.883365 1126330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 20:56:23.895655 1126330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 20:56:23.950122 1126330 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-08-19 20:56:23.939357488 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 20:56:23.950715 1126330 kubeconfig.go:125] found "multinode-728402" server: "https://192.168.67.2:8443"
	I0819 20:56:23.950752 1126330 api_server.go:166] Checking apiserver status ...
	I0819 20:56:23.950803 1126330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 20:56:23.963130 1126330 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1386/cgroup
	I0819 20:56:23.972814 1126330 api_server.go:182] apiserver freezer: "12:freezer:/docker/b4c3f7dfbc9f7a2b97f6e5fab0e965b6c2fb4f729a64c232932514838ed2a1ce/crio/crio-018a6de2a9a144c25c71493cbc13e0f50f6c65a6b7d1505523eaed42af6b47aa"
	I0819 20:56:23.972888 1126330 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b4c3f7dfbc9f7a2b97f6e5fab0e965b6c2fb4f729a64c232932514838ed2a1ce/crio/crio-018a6de2a9a144c25c71493cbc13e0f50f6c65a6b7d1505523eaed42af6b47aa/freezer.state
	I0819 20:56:23.981507 1126330 api_server.go:204] freezer state: "THAWED"
	I0819 20:56:23.981549 1126330 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0819 20:56:23.989533 1126330 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0819 20:56:23.989565 1126330 status.go:422] multinode-728402 apiserver status = Running (err=<nil>)
	I0819 20:56:23.989576 1126330 status.go:257] multinode-728402 status: &{Name:multinode-728402 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 20:56:23.989701 1126330 status.go:255] checking status of multinode-728402-m02 ...
	I0819 20:56:23.990061 1126330 cli_runner.go:164] Run: docker container inspect multinode-728402-m02 --format={{.State.Status}}
	I0819 20:56:24.009323 1126330 status.go:330] multinode-728402-m02 host status = "Running" (err=<nil>)
	I0819 20:56:24.009368 1126330 host.go:66] Checking if "multinode-728402-m02" exists ...
	I0819 20:56:24.009913 1126330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-728402-m02
	I0819 20:56:24.031563 1126330 host.go:66] Checking if "multinode-728402-m02" exists ...
	I0819 20:56:24.032060 1126330 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 20:56:24.032123 1126330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-728402-m02
	I0819 20:56:24.051031 1126330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34038 SSHKeyPath:/home/jenkins/minikube-integration/19423-1006087/.minikube/machines/multinode-728402-m02/id_rsa Username:docker}
	I0819 20:56:24.143432 1126330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 20:56:24.156404 1126330 status.go:257] multinode-728402-m02 status: &{Name:multinode-728402-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0819 20:56:24.156440 1126330 status.go:255] checking status of multinode-728402-m03 ...
	I0819 20:56:24.156771 1126330 cli_runner.go:164] Run: docker container inspect multinode-728402-m03 --format={{.State.Status}}
	I0819 20:56:24.173783 1126330 status.go:330] multinode-728402-m03 host status = "Stopped" (err=<nil>)
	I0819 20:56:24.173805 1126330 status.go:343] host is not running, skipping remaining checks
	I0819 20:56:24.173814 1126330 status.go:257] multinode-728402-m03 status: &{Name:multinode-728402-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.21s)
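
Note the exit code: `minikube status` returns a non-zero code (7 in this run) when any node is stopped, while still printing the per-node table. A small Go sketch that surfaces that exit code instead of treating it as a hard failure; the profile name is a placeholder:

    package main

    import (
        "errors"
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        profile := "multinode-demo" // hypothetical profile name

        cmd := exec.Command("minikube", "-p", profile, "status")
        out, err := cmd.Output() // stdout is captured even when status exits non-zero
        fmt.Print(string(out))

        var exitErr *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("all nodes running")
        case errors.As(err, &exitErr):
            // Exit status 7 in the run above meant "a node is stopped".
            fmt.Println("status exited with code", exitErr.ExitCode())
        default:
            log.Fatalf("could not run status: %v", err)
        }
    }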

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (9.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-728402 node start m03 -v=7 --alsologtostderr: (9.177856706s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.94s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (102.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-728402
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-728402
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-728402: (24.792648637s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-728402 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-728402 --wait=true -v=8 --alsologtostderr: (1m17.165234129s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-728402
--- PASS: TestMultiNode/serial/RestartKeepsNodes (102.08s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-728402 node delete m03: (4.893826501s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.56s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-728402 stop: (23.776287179s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-728402 status: exit status 7 (94.161701ms)

                                                
                                                
-- stdout --
	multinode-728402
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-728402-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-728402 status --alsologtostderr: exit status 7 (92.9671ms)

                                                
                                                
-- stdout --
	multinode-728402
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-728402-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 20:58:45.675178 1134145 out.go:345] Setting OutFile to fd 1 ...
	I0819 20:58:45.675407 1134145 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:58:45.675433 1134145 out.go:358] Setting ErrFile to fd 2...
	I0819 20:58:45.675452 1134145 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 20:58:45.675748 1134145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1006087/.minikube/bin
	I0819 20:58:45.676019 1134145 out.go:352] Setting JSON to false
	I0819 20:58:45.676088 1134145 mustload.go:65] Loading cluster: multinode-728402
	I0819 20:58:45.676183 1134145 notify.go:220] Checking for updates...
	I0819 20:58:45.676571 1134145 config.go:182] Loaded profile config "multinode-728402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 20:58:45.676605 1134145 status.go:255] checking status of multinode-728402 ...
	I0819 20:58:45.677155 1134145 cli_runner.go:164] Run: docker container inspect multinode-728402 --format={{.State.Status}}
	I0819 20:58:45.695060 1134145 status.go:330] multinode-728402 host status = "Stopped" (err=<nil>)
	I0819 20:58:45.695080 1134145 status.go:343] host is not running, skipping remaining checks
	I0819 20:58:45.695088 1134145 status.go:257] multinode-728402 status: &{Name:multinode-728402 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 20:58:45.695110 1134145 status.go:255] checking status of multinode-728402-m02 ...
	I0819 20:58:45.695455 1134145 cli_runner.go:164] Run: docker container inspect multinode-728402-m02 --format={{.State.Status}}
	I0819 20:58:45.721850 1134145 status.go:330] multinode-728402-m02 host status = "Stopped" (err=<nil>)
	I0819 20:58:45.721877 1134145 status.go:343] host is not running, skipping remaining checks
	I0819 20:58:45.721886 1134145 status.go:257] multinode-728402-m02 status: &{Name:multinode-728402-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.96s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (54.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-728402 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-728402 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (53.942731073s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-728402 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.65s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (36.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-728402
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-728402-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-728402-m02 --driver=docker  --container-runtime=crio: exit status 14 (87.700597ms)

                                                
                                                
-- stdout --
	* [multinode-728402-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-1006087/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1006087/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-728402-m02' is duplicated with machine name 'multinode-728402-m02' in profile 'multinode-728402'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-728402-m03 --driver=docker  --container-runtime=crio
E0819 20:59:51.919818 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: no such file or directory" logger="UnhandledError"
E0819 20:59:53.553009 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/functional-915934/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-728402-m03 --driver=docker  --container-runtime=crio: (34.097696716s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-728402
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-728402: exit status 80 (330.524818ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-728402 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-728402-m03 already exists in multinode-728402-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-728402-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-728402-m03: (1.985776628s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.55s)

                                                
                                    
x
+
TestPreload (139.65s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-237885 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-237885 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m35.890732333s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-237885 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-237885 image pull gcr.io/k8s-minikube/busybox: (1.826081937s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-237885
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-237885: (5.7831218s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-237885 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-237885 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (33.460192742s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-237885 image list
helpers_test.go:175: Cleaning up "test-preload-237885" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-237885
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-237885: (2.372351191s)
--- PASS: TestPreload (139.65s)
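
The preload round-trip here is: start with --preload=false on an older Kubernetes version, pull an extra image, stop, restart, and confirm the image survived. A sketch of that sequence, assuming minikube is on PATH and using a hypothetical profile name:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    // run is a tiny helper for the preload round-trip sketched here.
    func run(args ...string) string {
        out, err := exec.Command("minikube", args...).CombinedOutput()
        if err != nil {
            log.Fatalf("minikube %v failed: %v\n%s", args, err, out)
        }
        return string(out)
    }

    func main() {
        profile := "preload-demo" // hypothetical profile name

        // 1. Start without a preloaded tarball on an older Kubernetes version.
        run("start", "-p", profile, "--memory=2200", "--preload=false",
            "--driver=docker", "--container-runtime=crio", "--kubernetes-version=v1.24.4")
        // 2. Pull an extra image into the node's runtime.
        run("-p", profile, "image", "pull", "gcr.io/k8s-minikube/busybox")
        // 3. Stop and restart; the pulled image should survive the restart.
        run("stop", "-p", profile)
        run("start", "-p", profile, "--memory=2200", "--wait=true",
            "--driver=docker", "--container-runtime=crio")
        // 4. Confirm the image is still listed.
        images := run("-p", profile, "image", "list")
        fmt.Println("busybox retained:", strings.Contains(images, "busybox"))
    }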

                                                
                                    
x
+
TestScheduledStopUnix (105.31s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-309031 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-309031 --memory=2048 --driver=docker  --container-runtime=crio: (29.098679235s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-309031 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-309031 -n scheduled-stop-309031
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-309031 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-309031 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-309031 -n scheduled-stop-309031
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-309031
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-309031 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-309031
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-309031: exit status 7 (70.481594ms)

                                                
                                                
-- stdout --
	scheduled-stop-309031
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-309031 -n scheduled-stop-309031
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-309031 -n scheduled-stop-309031: exit status 7 (74.348733ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-309031" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-309031
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-309031: (4.65255845s)
--- PASS: TestScheduledStopUnix (105.31s)
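
The scheduled-stop flow pairs `stop --schedule` with `stop --cancel-scheduled`, and the harness watches the pending schedule through the {{.TimeToStop}} status field. A minimal sketch of the same pair, assuming minikube is on PATH; the profile name is a placeholder:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        profile := "scheduled-stop-demo" // hypothetical profile name

        // Schedule a stop five minutes out, then cancel it, mirroring the
        // --schedule / --cancel-scheduled pair exercised above.
        if out, err := exec.Command("minikube", "stop", "-p", profile,
            "--schedule", "5m").CombinedOutput(); err != nil {
            log.Fatalf("schedule failed: %v\n%s", err, out)
        }
        if out, err := exec.Command("minikube", "stop", "-p", profile,
            "--cancel-scheduled").CombinedOutput(); err != nil {
            log.Fatalf("cancel failed: %v\n%s", err, out)
        }

        // {{.TimeToStop}} reports the pending schedule; after cancelling it
        // should come back empty.
        out, err := exec.Command("minikube", "status",
            "--format={{.TimeToStop}}", "-p", profile).CombinedOutput()
        if err != nil {
            log.Fatalf("status failed: %v\n%s", err, out)
        }
        log.Printf("time to stop: %q", out)
    }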

                                                
                                    
x
+
TestInsufficientStorage (10.62s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-076910 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-076910 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.077269319s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9cc5e872-2fff-4dc7-b38c-83b12b4b96d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-076910] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ce87fb98-263d-4bed-9ad8-f81630f04d55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19423"}}
	{"specversion":"1.0","id":"3ff49268-6012-4318-8743-1809f1f8ede4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"14c11cb6-888d-4719-bedf-997b27819933","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19423-1006087/kubeconfig"}}
	{"specversion":"1.0","id":"c310a9dc-dff0-44d0-80db-3e8d2cc1632f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1006087/.minikube"}}
	{"specversion":"1.0","id":"6750ae4f-721a-4ed3-94b8-8cab81dafa6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"3c5d4e81-06a8-4ae2-9846-7d82fadf1553","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1d350cf9-c6ff-404b-9450-e0298b3c88c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"96739337-914c-4266-827a-2cfbe475018a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"ed5cbd73-a98a-4890-b1c4-360c3411ee33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a545cb6f-cdf0-47ca-9e44-e73423625cd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"595acf81-12ac-4001-932f-e338649f99c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-076910\" primary control-plane node in \"insufficient-storage-076910\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8de48470-adbe-416f-9310-b35ecf266376","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1723740748-19452 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"17dca912-a1c1-4610-9a9d-3415795bf366","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"170b64b0-291e-4c3c-b662-6589314d9e65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-076910 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-076910 --output=json --layout=cluster: exit status 7 (306.322589ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-076910","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-076910","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 21:04:34.330176 1151936 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-076910" does not appear in /home/jenkins/minikube-integration/19423-1006087/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-076910 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-076910 --output=json --layout=cluster: exit status 7 (294.95481ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-076910","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-076910","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 21:04:34.624284 1151999 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-076910" does not appear in /home/jenkins/minikube-integration/19423-1006087/kubeconfig
	E0819 21:04:34.634174 1151999 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/insufficient-storage-076910/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-076910" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-076910
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-076910: (1.941909306s)
--- PASS: TestInsufficientStorage (10.62s)
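
The status dumps above use `--output=json --layout=cluster`, which reports StatusCode 507 / InsufficientStorage and exits 7 while the cluster is unhealthy. A sketch of decoding just the top-level fields visible in that output; the struct below declares only the pieces it reads, and the profile name is a placeholder:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    // clusterStatus mirrors the top-level fields in the --layout=cluster JSON above.
    type clusterStatus struct {
        Name         string `json:"Name"`
        StatusCode   int    `json:"StatusCode"`
        StatusName   string `json:"StatusName"`
        StatusDetail string `json:"StatusDetail"`
    }

    func main() {
        profile := "insufficient-storage-demo" // hypothetical profile name

        // The command exits non-zero (7 above) for an unhealthy cluster, so
        // ignore the exit error and decode whatever JSON was printed.
        out, _ := exec.Command("minikube", "status", "-p", profile,
            "--output=json", "--layout=cluster").Output()

        var st clusterStatus
        if err := json.Unmarshal(out, &st); err != nil {
            log.Fatalf("decode failed: %v", err)
        }
        fmt.Printf("%s: %d %s (%s)\n", st.Name, st.StatusCode, st.StatusName, st.StatusDetail)
    }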

                                                
                                    
x
+
TestRunningBinaryUpgrade (77.05s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3950655564 start -p running-upgrade-825138 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3950655564 start -p running-upgrade-825138 --memory=2200 --vm-driver=docker  --container-runtime=crio: (38.327247635s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-825138 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0819 21:09:51.919477 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:09:53.553278 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/functional-915934/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-825138 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.221318162s)
helpers_test.go:175: Cleaning up "running-upgrade-825138" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-825138
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-825138: (3.384217512s)
--- PASS: TestRunningBinaryUpgrade (77.05s)

                                                
                                    
x
+
TestKubernetesUpgrade (391.14s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-823896 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-823896 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m17.920112721s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-823896
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-823896: (1.524985771s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-823896 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-823896 status --format={{.Host}}: exit status 7 (103.696552ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-823896 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-823896 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m37.968752869s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-823896 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-823896 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-823896 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (106.473024ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-823896] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-1006087/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1006087/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-823896
	    minikube start -p kubernetes-upgrade-823896 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8238962 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-823896 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-823896 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-823896 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.883775688s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-823896" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-823896
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-823896: (2.511158652s)
--- PASS: TestKubernetesUpgrade (391.14s)
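Note: the sequence exercised above can be reproduced by hand. The sketch below is illustrative only (the profile name demo-upgrade is not taken from this run); it upgrades one profile in place and shows why the downgrade attempt exits with K8S_DOWNGRADE_UNSUPPORTED (exit code 106), with the recovery being the delete-and-recreate path minikube itself suggests.

  # start on an old Kubernetes, then upgrade the same profile in place
  $ minikube start -p demo-upgrade --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio
  $ minikube stop -p demo-upgrade
  $ minikube start -p demo-upgrade --kubernetes-version=v1.31.0 --driver=docker --container-runtime=crio
  # downgrading the same profile is refused (exit code 106)
  $ minikube start -p demo-upgrade --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio
  # to go back, delete and recreate instead
  $ minikube delete -p demo-upgrade
  $ minikube start -p demo-upgrade --kubernetes-version=v1.20.0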

                                                
                                    
x
+
TestMissingContainerUpgrade (158.08s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3270836866 start -p missing-upgrade-017461 --memory=2200 --driver=docker  --container-runtime=crio
E0819 21:04:51.919619 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:04:53.552934 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/functional-915934/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3270836866 start -p missing-upgrade-017461 --memory=2200 --driver=docker  --container-runtime=crio: (1m27.047687094s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-017461
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-017461: (10.465721833s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-017461
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-017461 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-017461 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (57.563743826s)
helpers_test.go:175: Cleaning up "missing-upgrade-017461" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-017461
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-017461: (2.067835676s)
--- PASS: TestMissingContainerUpgrade (158.08s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-327235 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-327235 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (85.352961ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-327235] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-1006087/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1006087/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
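For reference, the MK_USAGE failure above is caused by the flag combination itself: --no-kubernetes cannot be combined with --kubernetes-version, and if a version is set in the global config it must be cleared first. A minimal sketch (the profile name demo is illustrative):

  # rejected with exit code 14
  $ minikube start -p demo --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=crio
  # clear any globally configured version, then start without Kubernetes
  $ minikube config unset kubernetes-version
  $ minikube start -p demo --no-kubernetes --driver=docker --container-runtime=crio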

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (41.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-327235 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-327235 --driver=docker  --container-runtime=crio: (40.464388721s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-327235 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.12s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (9.67s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-327235 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-327235 --no-kubernetes --driver=docker  --container-runtime=crio: (7.217367013s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-327235 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-327235 status -o json: exit status 2 (377.509055ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-327235","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-327235
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-327235: (2.073564087s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.67s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (7.92s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-327235 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-327235 --no-kubernetes --driver=docker  --container-runtime=crio: (7.918728465s)
--- PASS: TestNoKubernetes/serial/Start (7.92s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-327235 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-327235 "sudo systemctl is-active --quiet service kubelet": exit status 1 (355.128956ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)
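The non-zero exit above is the expected outcome: systemctl is-active returns 0 only when the unit is active and a non-zero status (3 in this run) when it is not, so a failing exit from the ssh wrapper is taken as proof that kubelet is not running. A rough equivalent of the same check:

  # 0 means kubelet is active; non-zero means it is not running
  $ minikube ssh -p NoKubernetes-327235 "sudo systemctl is-active --quiet service kubelet" \
      || echo "kubelet is not running"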

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.09s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-327235
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-327235: (1.287122028s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (8.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-327235 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-327235 --driver=docker  --container-runtime=crio: (8.117813422s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.12s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-327235 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-327235 "sudo systemctl is-active --quiet service kubelet": exit status 1 (264.007602ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.46s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.46s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (85.03s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3194856065 start -p stopped-upgrade-314340 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0819 21:07:54.989721 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3194856065 start -p stopped-upgrade-314340 --memory=2200 --vm-driver=docker  --container-runtime=crio: (52.473572277s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3194856065 -p stopped-upgrade-314340 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3194856065 -p stopped-upgrade-314340 stop: (2.809812657s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-314340 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-314340 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.749220588s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (85.03s)
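The upgrade path above amounts to: provision the cluster with the old release binary, stop it, then start the same profile with the current binary. A rough sketch, assuming an old minikube binary has been downloaded to /tmp/minikube-old (that path and the profile name are illustrative; --vm-driver is the legacy flag name used by the old binary in this run):

  $ /tmp/minikube-old start -p stopped-upgrade --memory=2200 --vm-driver=docker --container-runtime=crio
  $ /tmp/minikube-old -p stopped-upgrade stop
  # the current binary picks up the stopped profile and upgrades it in place
  $ minikube start -p stopped-upgrade --memory=2200 --driver=docker --container-runtime=crio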

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.23s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-314340
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-314340: (1.228444434s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.23s)

                                                
                                    
x
+
TestPause/serial/Start (51.85s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-543868 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-543868 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (51.84800742s)
--- PASS: TestPause/serial/Start (51.85s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (27.05s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-543868 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-543868 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.034078653s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (27.05s)

                                                
                                    
x
+
TestPause/serial/Pause (0.95s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-543868 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.95s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.37s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-543868 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-543868 --output=json --layout=cluster: exit status 2 (374.743253ms)

                                                
                                                
-- stdout --
	{"Name":"pause-543868","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-543868","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.37s)
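The StatusCode values in the JSON above encode component state in the --layout=cluster output (200 OK, 405 Stopped, 418 Paused, as seen in this run), which is also why the status command exits 2 while the cluster is paused. A hedged example of reading it, assuming jq is available on the host:

  $ minikube status -p pause-543868 --output=json --layout=cluster \
      | jq '.StatusName, .Nodes[0].Components'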

                                                
                                    
x
+
TestPause/serial/Unpause (0.69s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-543868 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.69s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.85s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-543868 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.85s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.63s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-543868 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-543868 --alsologtostderr -v=5: (2.633761121s)
--- PASS: TestPause/serial/DeletePaused (2.63s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.34s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-543868
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-543868: exit status 1 (16.278732ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-543868: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.34s)
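A deleted profile can be double-checked the same way the test does: the container, its volume and its network should all be gone. A minimal sketch using the profile deleted above:

  $ docker ps -a --filter name=pause-543868
  $ docker volume inspect pause-543868     # "no such volume" is the expected outcome
  $ docker network ls | grep pause-543868 || echo "network removed"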

                                                
                                    
x
+
TestNetworkPlugins/group/false (5.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-116466 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-116466 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (257.070366ms)

                                                
                                                
-- stdout --
	* [false-116466] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-1006087/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1006087/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 21:12:15.557089 1192137 out.go:345] Setting OutFile to fd 1 ...
	I0819 21:12:15.557305 1192137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 21:12:15.557332 1192137 out.go:358] Setting ErrFile to fd 2...
	I0819 21:12:15.557351 1192137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 21:12:15.557673 1192137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1006087/.minikube/bin
	I0819 21:12:15.558238 1192137 out.go:352] Setting JSON to false
	I0819 21:12:15.559678 1192137 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":17677,"bootTime":1724084259,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0819 21:12:15.559776 1192137 start.go:139] virtualization:  
	I0819 21:12:15.563149 1192137 out.go:177] * [false-116466] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 21:12:15.566728 1192137 out.go:177]   - MINIKUBE_LOCATION=19423
	I0819 21:12:15.566827 1192137 notify.go:220] Checking for updates...
	I0819 21:12:15.573513 1192137 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 21:12:15.576187 1192137 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-1006087/kubeconfig
	I0819 21:12:15.578879 1192137 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1006087/.minikube
	I0819 21:12:15.581727 1192137 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 21:12:15.584414 1192137 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 21:12:15.587884 1192137 config.go:182] Loaded profile config "kubernetes-upgrade-823896": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 21:12:15.588094 1192137 driver.go:394] Setting default libvirt URI to qemu:///system
	I0819 21:12:15.637810 1192137 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 21:12:15.637927 1192137 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 21:12:15.748510 1192137 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-19 21:12:15.734360594 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 21:12:15.748629 1192137 docker.go:307] overlay module found
	I0819 21:12:15.751610 1192137 out.go:177] * Using the docker driver based on user configuration
	I0819 21:12:15.754217 1192137 start.go:297] selected driver: docker
	I0819 21:12:15.754242 1192137 start.go:901] validating driver "docker" against <nil>
	I0819 21:12:15.754258 1192137 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 21:12:15.757766 1192137 out.go:201] 
	W0819 21:12:15.760402 1192137 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0819 21:12:15.762988 1192137 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-116466 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-116466

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-116466

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-116466

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-116466

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-116466

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-116466

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-116466

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-116466

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-116466

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-116466

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-116466

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-116466" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-116466" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-116466" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-116466" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-116466" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-116466" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-116466" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-116466" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-116466" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-116466" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-116466" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 21:11:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-823896
contexts:
- context:
    cluster: kubernetes-upgrade-823896
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 21:11:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: kubernetes-upgrade-823896
  name: kubernetes-upgrade-823896
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-823896
  user:
    client-certificate: /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/kubernetes-upgrade-823896/client.crt
    client-key: /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/kubernetes-upgrade-823896/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-116466

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-116466"

                                                
                                                
----------------------- debugLogs end: false-116466 [took: 4.85413604s] --------------------------------
helpers_test.go:175: Cleaning up "false-116466" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-116466
--- PASS: TestNetworkPlugins/group/false (5.33s)
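The MK_USAGE error in this group is by design: with the crio runtime a CNI must be configured, so --cni=false is rejected before any node is created. A hedged alternative is to pick an explicit CNI (or leave --cni at its default); the profile name below is illustrative:

  # rejected: the crio container runtime requires CNI
  $ minikube start -p demo-cni --cni=false --driver=docker --container-runtime=crio
  # accepted: select a concrete CNI plugin instead
  $ minikube start -p demo-cni --cni=bridge --driver=docker --container-runtime=crio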

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (180.58s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-038334 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0819 21:14:51.919962 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:14:53.553223 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/functional-915934/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-038334 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (3m0.578664533s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (180.58s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-038334 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [27e1ee06-5dbc-458d-8828-c35722e8adea] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [27e1ee06-5dbc-458d-8828-c35722e8adea] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.005098962s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-038334 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.68s)
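The DeployApp step above boils down to applying a manifest, waiting for the labelled pod to become Ready, and exec'ing into it. A rough equivalent with plain kubectl (context name taken from this run; the manifest path is the one used by the test tree):

  $ kubectl --context old-k8s-version-038334 create -f testdata/busybox.yaml
  $ kubectl --context old-k8s-version-038334 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
  $ kubectl --context old-k8s-version-038334 exec busybox -- /bin/sh -c "ulimit -n"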

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (71s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-018337 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-018337 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (1m10.998630972s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (71.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-038334 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-038334 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.964906629s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-038334 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.41s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (13.72s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-038334 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-038334 --alsologtostderr -v=3: (13.717555001s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.72s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-038334 -n old-k8s-version-038334
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-038334 -n old-k8s-version-038334: exit status 7 (142.832126ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-038334 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.31s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (371.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-038334 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-038334 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (6m10.950470553s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-038334 -n old-k8s-version-038334
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (371.40s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-018337 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a3da1d44-6e48-464e-8bec-5002f7faf2f2] Pending
helpers_test.go:344: "busybox" [a3da1d44-6e48-464e-8bec-5002f7faf2f2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a3da1d44-6e48-464e-8bec-5002f7faf2f2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00343907s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-018337 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.41s)
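As a rough illustration of the DeployApp flow above (the kubectl commands are taken from the log; the wait step is a hand substitute for the test's own polling, and the contents of testdata/busybox.yaml are not reproduced here):

	# Create the busybox test pod, wait for it to become Ready, then check its open-file limit.
	kubectl --context no-preload-018337 create -f testdata/busybox.yaml
	kubectl --context no-preload-018337 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
	kubectl --context no-preload-018337 exec busybox -- /bin/sh -c "ulimit -n"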

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-018337 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-018337 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.062624098s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-018337 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/no-preload/serial/Stop (12.01s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-018337 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-018337 --alsologtostderr -v=3: (12.011368049s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.01s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-018337 -n no-preload-018337
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-018337 -n no-preload-018337: exit status 7 (72.215572ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-018337 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (267.09s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-018337 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0819 21:19:51.919589 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:19:53.552823 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/functional-915934/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-018337 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m26.677713032s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-018337 -n no-preload-018337
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (267.09s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-w6fx5" [e94cbe88-a1cf-4fe3-b603-8ef024f6bf73] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003860211s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-w6fx5" [e94cbe88-a1cf-4fe3-b603-8ef024f6bf73] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003938158s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-018337 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-018337 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Pause (3.15s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-018337 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-018337 -n no-preload-018337
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-018337 -n no-preload-018337: exit status 2 (328.776547ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-018337 -n no-preload-018337
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-018337 -n no-preload-018337: exit status 2 (320.310198ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-018337 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-018337 -n no-preload-018337
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-018337 -n no-preload-018337
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.15s)
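The Pause subtest above always runs the same sequence; a minimal shell sketch of it using this run's profile name (while paused, the API server reports "Paused", the kubelet reports "Stopped", and both status calls exit with code 2, which the test accepts):

	out/minikube-linux-arm64 pause -p no-preload-018337 --alsologtostderr -v=1
	# Both checks below are expected to exit 2 while the cluster is paused.
	out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-018337 -n no-preload-018337 || true
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-018337 -n no-preload-018337 || true
	out/minikube-linux-arm64 unpause -p no-preload-018337 --alsologtostderr -v=1
	# After unpausing, the same status queries should succeed again.
	out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-018337 -n no-preload-018337
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-018337 -n no-preload-018337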

TestStartStop/group/embed-certs/serial/FirstStart (55.38s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-415518 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-415518 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (55.380741705s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (55.38s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-5wgk7" [5c3109e9-6f4d-4aad-ba6b-d1523b510221] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004496348s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-5wgk7" [5c3109e9-6f4d-4aad-ba6b-d1523b510221] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006974471s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-038334 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-038334 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/old-k8s-version/serial/Pause (3.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-038334 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-038334 --alsologtostderr -v=1: (1.11227117s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-038334 -n old-k8s-version-038334
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-038334 -n old-k8s-version-038334: exit status 2 (362.148122ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-038334 -n old-k8s-version-038334
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-038334 -n old-k8s-version-038334: exit status 2 (366.357683ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-038334 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-038334 --alsologtostderr -v=1: (1.018791413s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-038334 -n old-k8s-version-038334
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-038334 -n old-k8s-version-038334
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.67s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.75s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-099685 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-099685 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (52.752726341s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.75s)

TestStartStop/group/embed-certs/serial/DeployApp (9.48s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-415518 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e5dc0217-2a71-4d8a-bdaf-fcbce5ac5aad] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e5dc0217-2a71-4d8a-bdaf-fcbce5ac5aad] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004596167s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-415518 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.48s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.66s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-415518 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-415518 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.438072095s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-415518 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.66s)
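For reference, the two commands above enable metrics-server with its image and registry overridden (flag values verbatim from the log) and then confirm that the override landed on the Deployment:

	# Enable metrics-server but point it at the echoserver image on a placeholder registry, as the test does.
	out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-415518 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	# Inspect the resulting Deployment in kube-system to see the substituted image.
	kubectl --context embed-certs-415518 describe deploy/metrics-server -n kube-system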

TestStartStop/group/embed-certs/serial/Stop (12.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-415518 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-415518 --alsologtostderr -v=3: (12.109690619s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.11s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-415518 -n embed-certs-415518
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-415518 -n embed-certs-415518: exit status 7 (86.571796ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-415518 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (297.13s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-415518 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-415518 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m56.798573773s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-415518 -n embed-certs-415518
E0819 21:29:23.271844 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/old-k8s-version-038334/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (297.13s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-099685 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e9e8407a-8080-480e-996d-9830b87ee1b0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e9e8407a-8080-480e-996d-9830b87ee1b0] Running
E0819 21:24:34.991368 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003669747s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-099685 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.42s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.62s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-099685 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-099685 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.435732944s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-099685 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.62s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-099685 --alsologtostderr -v=3
E0819 21:24:51.919814 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-099685 --alsologtostderr -v=3: (12.888468106s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.89s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-099685 -n default-k8s-diff-port-099685
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-099685 -n default-k8s-diff-port-099685: exit status 7 (171.992184ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-099685 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (279.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-099685 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0819 21:24:53.552887 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/functional-915934/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:26:39.411102 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/old-k8s-version-038334/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:26:39.417464 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/old-k8s-version-038334/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:26:39.428818 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/old-k8s-version-038334/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:26:39.450355 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/old-k8s-version-038334/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:26:39.491859 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/old-k8s-version-038334/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:26:39.573435 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/old-k8s-version-038334/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:26:39.734990 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/old-k8s-version-038334/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:26:40.056915 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/old-k8s-version-038334/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:26:40.698986 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/old-k8s-version-038334/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:26:41.980743 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/old-k8s-version-038334/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:26:44.542041 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/old-k8s-version-038334/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:26:49.663715 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/old-k8s-version-038334/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:26:59.905699 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/old-k8s-version-038334/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:27:20.387953 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/old-k8s-version-038334/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:27:59.239540 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/no-preload-018337/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:27:59.245958 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/no-preload-018337/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:27:59.257743 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/no-preload-018337/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:27:59.279190 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/no-preload-018337/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:27:59.320815 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/no-preload-018337/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:27:59.402333 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/no-preload-018337/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:27:59.564834 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/no-preload-018337/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:27:59.886660 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/no-preload-018337/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:28:00.528110 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/no-preload-018337/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:28:01.350062 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/old-k8s-version-038334/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:28:01.810467 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/no-preload-018337/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:28:04.372611 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/no-preload-018337/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:28:09.494745 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/no-preload-018337/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:28:19.737039 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/no-preload-018337/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:28:40.219152 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/no-preload-018337/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:29:21.181337 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/no-preload-018337/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-099685 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m39.2184231s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-099685 -n default-k8s-diff-port-099685
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (279.58s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-bptmm" [b5761f1b-2720-4136-8464-e5f91b21e1a5] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003724769s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-bptmm" [b5761f1b-2720-4136-8464-e5f91b21e1a5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004402713s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-415518 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.12s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5sztv" [99e60e52-b1e4-4931-859b-b0f56cecd108] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005203866s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-415518 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
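The image check above lists everything loaded in the profile as JSON; a small sketch of inspecting that output by hand (piping through jq for readability is an illustration, not something the test itself does):

	# Pretty-print the image list; entries such as kindest/kindnetd and gcr.io/k8s-minikube/busybox
	# are reported alongside the core Kubernetes images.
	out/minikube-linux-arm64 -p embed-certs-415518 image list --format=json | jq .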

TestStartStop/group/embed-certs/serial/Pause (3.08s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-415518 --alsologtostderr -v=1
E0819 21:29:36.621387 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/functional-915934/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-415518 -n embed-certs-415518
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-415518 -n embed-certs-415518: exit status 2 (314.117345ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-415518 -n embed-certs-415518
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-415518 -n embed-certs-415518: exit status 2 (345.175198ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-415518 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-415518 -n embed-certs-415518
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-415518 -n embed-certs-415518
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.08s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5sztv" [99e60e52-b1e4-4931-859b-b0f56cecd108] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004084241s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-099685 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/newest-cni/serial/FirstStart (47.99s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-969833 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-969833 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (47.990172444s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.99s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-099685 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.42s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-099685 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-099685 --alsologtostderr -v=1: (1.026545613s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-099685 -n default-k8s-diff-port-099685
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-099685 -n default-k8s-diff-port-099685: exit status 2 (382.157833ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-099685 -n default-k8s-diff-port-099685
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-099685 -n default-k8s-diff-port-099685: exit status 2 (546.648481ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-099685 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-099685 --alsologtostderr -v=1: (1.031108594s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-099685 -n default-k8s-diff-port-099685
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-099685 -n default-k8s-diff-port-099685
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.93s)

TestNetworkPlugins/group/auto/Start (58.15s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-116466 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0819 21:29:51.919784 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/addons-199708/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:29:53.552351 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/functional-915934/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-116466 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (58.15230583s)
--- PASS: TestNetworkPlugins/group/auto/Start (58.15s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.39s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-969833 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-969833 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.393405686s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.39s)

TestStartStop/group/newest-cni/serial/Stop (1.4s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-969833 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-969833 --alsologtostderr -v=3: (1.402428741s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.40s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.34s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-969833 -n newest-cni-969833
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-969833 -n newest-cni-969833: exit status 7 (147.94052ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-969833 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.34s)

TestStartStop/group/newest-cni/serial/SecondStart (17.6s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-969833 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0819 21:30:43.102983 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/no-preload-018337/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-969833 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (16.971432011s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-969833 -n newest-cni-969833
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.60s)

TestNetworkPlugins/group/auto/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-116466 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.40s)

TestNetworkPlugins/group/auto/NetCatPod (12.55s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-116466 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-67l5j" [579b7818-f71a-4934-83f5-e4f6dc251c3c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-67l5j" [579b7818-f71a-4934-83f5-e4f6dc251c3c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.005535613s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.55s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-969833 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/newest-cni/serial/Pause (3.96s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-969833 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-969833 --alsologtostderr -v=1: (1.101516812s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-969833 -n newest-cni-969833
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-969833 -n newest-cni-969833: exit status 2 (527.192701ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-969833 -n newest-cni-969833
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-969833 -n newest-cni-969833: exit status 2 (411.25439ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-969833 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-969833 -n newest-cni-969833
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-969833 -n newest-cni-969833
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.96s)
E0819 21:36:00.255521 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/auto-116466/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:36:10.497555 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/auto-116466/client.crt: no such file or directory" logger="UnhandledError"
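As the Pause log notes, the status query can legitimately exit with status 2 while components are paused ("may be ok"). The Go sketch below drives the same pause / status / unpause sequence and tolerates that exit code; treating exit status 2 as informational is an illustrative assumption drawn from the log, not minikube's documented contract.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// runStatus runs a status query and treats exit status 2 as informational,
	// matching the "(may be ok)" note in the log above.
	func runStatus(profile, format string) (string, error) {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format="+format, "-p", profile, "-n", profile)
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 2 {
			// Paused components report exit status 2; keep the output anyway.
			return string(out), nil
		}
		return string(out), err
	}

	func main() {
		profile := "newest-cni-969833"
		if err := exec.Command("out/minikube-linux-arm64", "pause", "-p", profile).Run(); err != nil {
			fmt.Println("pause failed:", err)
			return
		}
		apiserver, _ := runStatus(profile, "{{.APIServer}}")
		kubelet, _ := runStatus(profile, "{{.Kubelet}}")
		fmt.Println("apiserver:", apiserver, "kubelet:", kubelet)
		if err := exec.Command("out/minikube-linux-arm64", "unpause", "-p", profile).Run(); err != nil {
			fmt.Println("unpause failed:", err)
		}
	}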

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (55.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-116466 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-116466 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (55.603153326s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (55.60s)
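Each Start test in this group boots a fresh profile with a single CNI, using exactly the flags shown in the Run line above. A hedged Go sketch of issuing that start command follows; wrapping it in a context timeout is an addition for illustration and is not part of the logged invocation.

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Give the start the same generous ceiling the test uses (--wait-timeout=15m).
		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Minute)
		defer cancel()

		cmd := exec.CommandContext(ctx, "out/minikube-linux-arm64", "start",
			"-p", "kindnet-116466",
			"--memory=3072", "--alsologtostderr",
			"--wait=true", "--wait-timeout=15m",
			"--cni=kindnet", "--driver=docker", "--container-runtime=crio")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Printf("start failed: %v\n%s\n", err, out)
			return
		}
		fmt.Println("kindnet profile started")
	}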

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-116466 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-116466 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-116466 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
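The DNS, Localhost and HairPin checks above all exec into the netcat deployment: nslookup for in-cluster DNS, nc against localhost:8080 for loopback, and nc against the service name netcat:8080 for hairpin traffic. The Go sketch below runs those three probes via kubectl exec; grouping them into one helper (and wrapping each probe in /bin/sh -c) is an illustrative choice, not how the test file is organised.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// probe execs a shell command inside the netcat deployment, mirroring the
	// kubectl invocations in the log above.
	func probe(kubeContext, shellCmd string) error {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"exec", "deployment/netcat", "--", "/bin/sh", "-c", shellCmd).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%q failed: %v\n%s", shellCmd, err, out)
		}
		return nil
	}

	func main() {
		ctxName := "auto-116466"
		checks := []string{
			"nslookup kubernetes.default",    // in-cluster DNS resolution
			"nc -w 5 -i 5 -z localhost 8080", // loopback reachability
			"nc -w 5 -i 5 -z netcat 8080",    // hairpin via the service name
		}
		for _, c := range checks {
			if err := probe(ctxName, c); err != nil {
				fmt.Println(err)
			}
		}
	}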

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (66.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-116466 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0819 21:31:39.410757 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/old-k8s-version-038334/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-116466 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m6.625897398s)
--- PASS: TestNetworkPlugins/group/calico/Start (66.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-thdcw" [a742655c-ab13-4a5f-8917-61c7a463b1a5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003815644s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
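The ControllerPod step waits for the CNI's own pod (here app=kindnet in kube-system) to be Running before the connectivity tests proceed. Below is a hedged client-go sketch of that wait; it assumes client-go is available and that a kubeconfig path is supplied by the caller, and it is not the helper the harness actually uses.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// anyRunning reports whether at least one pod matching the selector is Running.
	func anyRunning(cs *kubernetes.Clientset, ns, selector string) (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.Background(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path, an assumption
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(10 * time.Minute) // same ceiling as the 10m0s wait in the log
		for time.Now().Before(deadline) {
			ok, err := anyRunning(cs, "kube-system", "app=kindnet")
			if err == nil && ok {
				fmt.Println("kindnet controller pod is Running")
				return
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("timed out waiting for app=kindnet")
	}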

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-116466 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)
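KubeletFlags runs pgrep -a kubelet over minikube ssh so the kubelet command line, and with it any CNI-related flags, can be inspected. A small Go sketch of that call follows; printing one field per line is illustrative only.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same command as the log: show the full kubelet command line on the node.
		out, err := exec.Command("out/minikube-linux-arm64", "ssh",
			"-p", "kindnet-116466", "pgrep -a kubelet").Output()
		if err != nil {
			fmt.Println("ssh failed:", err)
			return
		}
		// Print each flag on its own line to make the CNI settings easy to spot.
		for _, field := range strings.Fields(string(out)) {
			fmt.Println(field)
		}
	}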

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-116466 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xbkkj" [7443177b-c6ca-4e52-a808-9184a3ad0611] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xbkkj" [7443177b-c6ca-4e52-a808-9184a3ad0611] Running
E0819 21:32:07.113111 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/old-k8s-version-038334/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004949809s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-116466 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-116466 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-116466 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-wtv2s" [36f86f57-3b27-42af-b2f0-f4b9cf22ead8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00686284s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (65.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-116466 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-116466 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m5.259778s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (65.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-116466 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (13.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-116466 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-b4fgl" [e3a7832b-0a42-492d-8475-42b99ba805c3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-b4fgl" [e3a7832b-0a42-492d-8475-42b99ba805c3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.004882861s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-116466 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-116466 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-116466 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (75.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-116466 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0819 21:33:26.944708 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/no-preload-018337/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-116466 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m15.298317054s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (75.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-116466 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-116466 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2fjzk" [35dd1d96-3f19-456e-a1ed-311a39c5ac76] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-2fjzk" [35dd1d96-3f19-456e-a1ed-311a39c5ac76] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.004580536s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-116466 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-116466 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-116466 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (57.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-116466 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0819 21:34:28.204991 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/default-k8s-diff-port-099685/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:34:28.211352 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/default-k8s-diff-port-099685/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:34:28.222739 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/default-k8s-diff-port-099685/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:34:28.244118 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/default-k8s-diff-port-099685/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:34:28.285860 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/default-k8s-diff-port-099685/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:34:28.367219 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/default-k8s-diff-port-099685/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:34:28.529135 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/default-k8s-diff-port-099685/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:34:28.851509 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/default-k8s-diff-port-099685/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:34:29.493055 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/default-k8s-diff-port-099685/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:34:30.775146 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/default-k8s-diff-port-099685/client.crt: no such file or directory" logger="UnhandledError"
E0819 21:34:33.337456 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/default-k8s-diff-port-099685/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-116466 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (57.815764101s)
--- PASS: TestNetworkPlugins/group/flannel/Start (57.82s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-116466 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-116466 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-sgsrd" [53818de7-13dd-47b6-ae83-09794f86e853] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0819 21:34:38.459160 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/default-k8s-diff-port-099685/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-sgsrd" [53818de7-13dd-47b6-ae83-09794f86e853] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.005657875s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-116466 exec deployment/netcat -- nslookup kubernetes.default
E0819 21:34:48.700976 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/default-k8s-diff-port-099685/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-116466 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-116466 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (75.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-116466 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-116466 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m15.938435928s)
--- PASS: TestNetworkPlugins/group/bridge/Start (75.94s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-lwvfl" [0b38fc25-c461-4a9b-ad10-679e8a5ba51c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004967483s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-116466 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (13.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-116466 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6tm59" [6e8d157a-c2d1-4a65-9406-cf881004131c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6tm59" [6e8d157a-c2d1-4a65-9406-cf881004131c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.00399208s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-116466 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-116466 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-116466 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-116466 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-116466 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-p44vg" [759d9e4d-48e2-477d-b93e-b930891f131b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0819 21:36:30.979518 1011462 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/auto-116466/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-p44vg" [759d9e4d-48e2-477d-b93e-b930891f131b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004561446s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-116466 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-116466 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-116466 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    

Test skip (30/328)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.53s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-295909 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-295909" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-295909
--- SKIP: TestDownloadOnlyKic (0.53s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-563721" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-563721
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-116466 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-116466

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-116466

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-116466

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-116466

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-116466

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-116466

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-116466

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-116466

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-116466

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-116466

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-116466

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-116466" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-116466" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-116466" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-116466" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-116466" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-116466" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-116466" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-116466" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-116466" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-116466" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-116466" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19423-1006087/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 21:11:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-823896
contexts:
- context:
    cluster: kubernetes-upgrade-823896
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 21:11:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: kubernetes-upgrade-823896
  name: kubernetes-upgrade-823896
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-823896
  user:
    client-certificate: /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/kubernetes-upgrade-823896/client.crt
    client-key: /home/jenkins/minikube-integration/19423-1006087/.minikube/profiles/kubernetes-upgrade-823896/client.key
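Editor's note: the kubeconfig captured above only holds the leftover kubernetes-upgrade-823896 entry and its current-context is empty, which is why every kubectl probe in this debugLogs block fails with context "kubenet-116466" does not exist. A minimal sketch of how to confirm this from the same workstation, using standard kubectl/minikube commands (the kubenet-116466 profile is only illustrative here and was never created by this run):

    # list the contexts kubectl actually knows about
    kubectl config get-contexts
    # selecting the missing context reproduces the error seen above
    kubectl config use-context kubenet-116466
    # list minikube profiles; kubenet-116466 will not appear
    out/minikube-linux-arm64 profile list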

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-116466

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-116466"

                                                
                                                
----------------------- debugLogs end: kubenet-116466 [took: 4.367253205s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-116466" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-116466
--- SKIP: TestNetworkPlugins/group/kubenet (4.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (6.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-116466 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-116466

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-116466

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-116466

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-116466

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-116466

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-116466

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-116466

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-116466

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-116466

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-116466

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-116466

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-116466" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-116466" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-116466" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-116466" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-116466" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-116466" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-116466" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-116466" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-116466

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-116466

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-116466" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-116466" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-116466

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-116466

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-116466" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-116466" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-116466" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-116466" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-116466" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
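Editor's note: for cilium-116466 the captured kubeconfig is entirely empty (clusters, contexts, and users are all null), consistent with the profile never having been started because the test is skipped before cluster creation. As a hedged sketch only (not what this skipped test did), a cilium-backed profile could be brought up and torn down manually with minikube's documented --cni flag:

    # list existing profiles; cilium-116466 is absent, matching the "Profile not found" messages above
    out/minikube-linux-arm64 profile list
    # hypothetical manual reproduction of a cilium-backed profile
    out/minikube-linux-arm64 start -p cilium-116466 --cni=cilium
    # clean up afterwards, as the test helper does below
    out/minikube-linux-arm64 delete -p cilium-116466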

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-116466

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-116466" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-116466"

                                                
                                                
----------------------- debugLogs end: cilium-116466 [took: 5.724768374s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-116466" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-116466
--- SKIP: TestNetworkPlugins/group/cilium (6.10s)

                                                
                                    